WO2018038553A1 - Mobile robot and control method thereof - Google Patents
Mobile robot and control method thereof
- Publication number
- WO2018038553A1 (PCT/KR2017/009260)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- obstacle
- image
- mobile robot
- unit
- recognition
- Prior art date
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
-
- A—HUMAN NECESSITIES
- A47—FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
- A47L—DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
- A47L9/00—Details or accessories of suction cleaners, e.g. mechanical means for controlling the suction or for effecting pulsating action; Storing devices specially adapted to suction cleaners or parts thereof; Carrying-vehicles specially adapted for suction cleaners
- A47L9/28—Installation of the electric equipment, e.g. adaptation or attachment to the suction cleaner; Controlling suction cleaners by electric means
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
- B25J11/008—Manipulators for service tasks
- B25J11/0085—Cleaning
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J19/00—Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
- B25J19/02—Sensing devices
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J19/00—Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
- B25J19/02—Sensing devices
- B25J19/021—Optical sensing devices
- B25J19/023—Optical sensing devices including video camera means
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J5/00—Manipulators mounted on wheels or on carriages
- B25J5/007—Manipulators mounted on wheels or on carriages mounted on wheels
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
- B25J9/1666—Avoiding collision or forbidden zones
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S15/00—Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
- G01S15/88—Sonar systems specially adapted for specific applications
- G01S15/93—Sonar systems specially adapted for specific applications for anti-collision purposes
- G01S15/931—Sonar systems specially adapted for specific applications for anti-collision purposes of land vehicles
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0221—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0234—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using optical markers or beacons
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0255—Control of position or course in two dimensions specially adapted to land vehicles using acoustic signals, e.g. ultra-sonic singals
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0268—Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
- G05D1/0272—Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means comprising means for registering the travel distance, e.g. revolutions of wheels
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V10/451—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
Definitions
- the present invention relates to a mobile robot and a control method thereof, and more particularly, to a mobile robot and a control method for performing obstacle recognition and avoidance.
- Robots have been developed for industrial use and have formed part of factory automation. Recently, the fields in which robots are applied have expanded further: medical robots, aerospace robots, and the like have been developed, and home robots that can be used in ordinary households have also been made. Among these robots, a robot capable of traveling on its own is called a mobile robot.
- A representative example of a mobile robot used at home is a robot cleaner, which is a device that cleans a given area by suctioning surrounding dust or foreign matter while traveling around that area by itself.
- the mobile robot is capable of moving by itself and is free to move, and is provided with a plurality of sensors for avoiding obstacles and the like while driving, and may travel to avoid obstacles.
- an infrared sensor or an ultrasonic sensor is used to detect obstacles of a mobile robot.
- The infrared sensor determines the presence of an obstacle and the distance to it based on the amount of light reflected back from the obstacle or the time the reflected light takes to return, while the ultrasonic sensor emits ultrasonic waves at a predetermined period and, when there is a wave reflected by an obstacle, determines the distance to the obstacle using the time difference between the moment of emission and the moment the reflected wave returns.
- Since obstacle recognition and avoidance greatly affect not only the driving performance but also the cleaning performance of the mobile robot, the reliability of the obstacle recognition capability must be ensured.
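- As a minimal illustration of the time-of-flight principle described above, the following Python sketch (a generic example, not code from this disclosure) converts a measured round-trip echo delay into a distance, assuming the nominal speed of sound in air.

```python
SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at 20 °C


def ultrasonic_distance_m(echo_delay_s: float) -> float:
    """Distance to the obstacle from the round-trip echo delay.

    The wave travels to the obstacle and back, so the one-way distance
    is half of (speed of sound x delay).
    """
    return SPEED_OF_SOUND_M_S * echo_delay_s / 2.0


# Example: an echo received about 5.8 ms after emission is roughly 1 m away.
print(ultrasonic_distance_m(0.0058))  # ~0.99 m
```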
- For example, Korean Patent No. 10-0669892 discloses a technique that combines an infrared sensor and an ultrasonic sensor to implement reliable obstacle recognition.
- However, Korean Patent No. 10-0669892 has the problem that it cannot determine the nature (attributes) of the obstacle.
- FIG. 1 is a view referred to in describing the obstacle detection and avoidance method of a conventional mobile robot.
- The conventional robot cleaner performs cleaning by suctioning dust and foreign substances while moving (S11).
- The ultrasonic sensor detects an ultrasonic signal reflected by an obstacle, recognizes the presence of the obstacle (S12), and determines whether the height of the recognized obstacle is a height the robot can climb over (S13).
- If the robot cleaner determines that it can climb over the obstacle, it continues to move straight (S14); otherwise, it rotates by 90 degrees (S15).
- For example, if the obstacle is a low threshold, the robot cleaner recognizes the threshold and, when it determines that it can pass over it, moves over the threshold.
- However, if an electrical wire is judged to be of a height that can be climbed over, the robot cleaner may become caught on the wire while crossing it.
- Likewise, the robot cleaner may determine that the base of an electric fan can be overcome, in which case its wheels may become restrained while it climbs up the fan's support, and human hair may be judged to be of a passable height so that the robot simply drives straight over it.
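- The conventional flow of steps S11 to S15 can be summarized by the following illustrative Python sketch (function and threshold names are hypothetical); it also shows why a height-only criterion is weak, since a thin wire and a low threshold of the same height receive the same decision.

```python
def conventional_decision(obstacle_detected: bool,
                          obstacle_height_cm: float,
                          climbable_height_cm: float = 1.5) -> str:
    """Conventional S12-S15 logic: decide only from the measured height."""
    if not obstacle_detected:
        return "keep cleaning"            # S11: keep suctioning and moving
    if obstacle_height_cm <= climbable_height_cm:
        return "go straight and climb"    # S13/S14: height judged passable
    return "rotate 90 degrees"            # S15: height judged impassable


# A low threshold and a thin electrical wire of the same height get the
# same decision, which is the weakness noted above.
print(conventional_decision(True, 1.0))   # -> "go straight and climb"
```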
- Machine learning is a field of artificial intelligence in which a computer learns from data and, based on what it has learned, performs tasks such as prediction and classification.
- Deep learning, which is based on artificial neural networks (ANN), is an artificial intelligence technology in which a computer learns on its own, much as a human does; in other words, with deep learning the computer itself finds and determines the distinguishing characteristics.
- Well-known deep learning frameworks include Theano from the University of Montreal in Canada, Torch from New York University in the USA, Caffe from the University of California, Berkeley, and TensorFlow from Google.
- An object of the present invention is to provide a mobile robot capable of acquiring image data capable of increasing the accuracy of obstacle attribute recognition and a control method thereof.
- An object of the present invention is to provide a mobile robot capable of determining a property of an obstacle and adjusting a driving pattern according to the property of an obstacle, and performing a reliable obstacle recognition and avoidance operation, and a control method thereof.
- An object of the present invention is to improve the stability of the mobile robot itself and the convenience of the user, and to improve the driving efficiency and cleaning efficiency by performing operations such as forward, retreat, stop, and bypass according to the obstacle recognition result. And to provide a control method thereof.
- An object of the present invention is to provide a mobile robot and a control method thereof capable of accurately recognizing the properties of obstacles based on machine learning.
- An object of the present invention is to provide a mobile robot capable of performing machine learning efficiently and extracting data that can be used for obstacle property recognition and a control method thereof.
- To achieve the above or other objects, a mobile robot according to an aspect of the present invention includes a driving unit for moving a main body, an image acquisition unit for acquiring a plurality of images by continuously photographing the surroundings of the main body, a storage unit for storing the plurality of consecutive images acquired by the image acquisition unit, a sensor unit including one or more sensors for detecting an obstacle during movement, and a controller including an obstacle recognition module that, when the sensor unit detects an obstacle, selects, based on the moving direction and moving speed of the main body, an image captured at a specific point in time before the obstacle detection time of the sensor unit from among the plurality of consecutive images, and recognizes the attribute of the obstacle included in the selected image. In this way, image data that can increase the accuracy of obstacle attribute recognition can be acquired, and the nature of the obstacle can be recognized accurately.
- The mobile robot according to an aspect of the present invention further includes a driving control module for controlling the driving of the driving unit based on the attributes of the recognized obstacle, so that stability, user convenience, driving efficiency, and cleaning efficiency can be improved.
- To achieve the above or other objects, a control method of a mobile robot according to an aspect of the present invention includes acquiring a plurality of images by continuously photographing the surroundings of the main body through the image acquisition unit during movement, storing the plurality of consecutive images acquired by the image acquisition unit, detecting an obstacle through the sensor unit, selecting, when the sensor unit detects the obstacle, an image captured at a specific point in time before the obstacle detection time of the sensor unit from among the plurality of consecutive images based on the moving direction and moving speed of the main body, recognizing the attribute of the obstacle included in the selected image, and controlling the driving of the driving unit based on the recognized attribute of the obstacle.
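- A minimal sketch of the frame-selection idea summarized above, under the assumption that captured frames are buffered with timestamps and that a faster-moving robot looks further back in time before the detection moment; the buffer size, look-back rule, and names are illustrative, not taken from this disclosure.

```python
from collections import deque
from dataclasses import dataclass
from typing import Deque, Optional


@dataclass
class Frame:
    timestamp: float   # capture time in seconds
    image: object      # e.g. an array from the front camera


class FrameSelector:
    """Buffers recent frames and picks one captured before the moment
    at which the sensor unit reported an obstacle."""

    def __init__(self, maxlen: int = 30):
        self.buffer: Deque[Frame] = deque(maxlen=maxlen)

    def add(self, frame: Frame) -> None:
        self.buffer.append(frame)

    def select(self, detection_time: float, speed_m_s: float) -> Optional[Frame]:
        # Illustrative rule: look further into the past when moving faster,
        # bounded between 0.1 s and 0.5 s before the detection time.
        lookback_s = min(0.5, max(0.1, speed_m_s * 0.4))
        target = detection_time - lookback_s
        candidates = [f for f in self.buffer if f.timestamp <= target]
        return candidates[-1] if candidates else None
```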
- image data capable of increasing the accuracy of obstacle attribute recognition may be acquired.
- the mobile robot may determine the property of the obstacle and adjust the driving pattern according to the property of the obstacle, thereby performing a reliable obstacle recognition and avoiding operation.
- the mobile robot can perform machine learning efficiently and extract data that can be used for obstacle property recognition.
- FIG. 1 is a view referred to in describing the obstacle detection and avoidance method of a conventional mobile robot.
- FIG. 2 is a perspective view showing a mobile robot and a charging table for charging the mobile robot according to an embodiment of the present invention.
- FIG. 3 is a diagram illustrating an upper surface of the mobile robot shown in FIG. 2.
- FIG. 4 is a diagram illustrating a front portion of the mobile robot illustrated in FIG. 2.
- FIG. 5 is a view illustrating a bottom portion of the mobile robot shown in FIG. 2.
- FIGS. 6 and 7 are block diagrams showing the control relationships between the major components of a mobile robot according to an embodiment of the present invention.
- FIG. 8 is an example of a simplified internal block diagram of a server according to an embodiment of the present invention.
- FIGS. 9 to 12 are diagrams referred to in describing deep learning.
- FIGS. 13 and 14 are views referred to in describing obstacle recognition.
- FIG. 15 is a flowchart illustrating a control method of a mobile robot according to an embodiment of the present invention.
- FIG. 16 is a flowchart illustrating a control method of a mobile robot according to an embodiment of the present invention.
- FIGS. 17 to 25 are views referred to in describing a control method of a mobile robot according to an embodiment of the present invention.
- FIG. 26 is a diagram referred to in describing a method of operating a mobile robot and a server according to an exemplary embodiment.
- FIG. 27 is a flowchart illustrating a method of operating a mobile robot and a server according to an embodiment of the present invention.
- FIG. 28 is a flowchart illustrating a control method of a mobile robot according to an embodiment of the present invention.
- FIG. 29 is a flowchart illustrating a control method of a mobile robot according to an embodiment of the present invention.
- FIGS. 30 to 34 are views referred to in describing a control method of a mobile robot according to an embodiment of the present invention.
- The suffixes “module” and “unit” used for components in the following description are given merely for ease of preparing this specification and do not by themselves carry any particular meaning or role. Therefore, “module” and “unit” may be used interchangeably.
- the mobile robot 100 refers to a robot that can move itself by using a wheel or the like, and may be a home helper robot or a robot cleaner.
- a robot cleaner having a cleaning function among mobile robots will be described with reference to the drawings, but the present invention is not limited thereto.
- FIG. 2 is a perspective view showing a mobile robot and a charging table for charging the mobile robot according to an embodiment of the present invention.
- FIG. 3 is a view illustrating the upper surface of the mobile robot shown in FIG. 2.
- FIG. 4 is a view illustrating the front portion of the mobile robot shown in FIG. 2.
- FIG. 5 is a view illustrating the bottom portion of the mobile robot shown in FIG. 2.
- FIGS. 6 and 7 are block diagrams showing the control relationships between the major components of a mobile robot according to an embodiment of the present invention.
- the mobile robots 100, 100a and 100b include a main body 110 and image acquisition units 120, 120a and 120b for acquiring images around the main body 110.
- the portion facing the ceiling in the driving zone is defined as the upper surface portion (see FIG. 3), and the portion facing the bottom in the driving zone is defined as the bottom portion (see FIG. 5).
- the front part is defined as a part facing the driving direction among the parts forming the circumference of the main body 110 between the upper and lower parts.
- the mobile robots 100, 100a, and 100b include a driving unit 160 for moving the main body 110.
- the driving unit 160 includes at least one driving wheel 136 for moving the main body 110.
- the driving unit 160 includes a driving motor (not shown) connected to the driving wheel 136 to rotate the driving wheel.
- the driving wheels 136 may be provided at the left and right sides of the main body 110, respectively, hereinafter referred to as left wheels 136 (L) and right wheels 136 (R).
- The left wheel 136(L) and the right wheel 136(R) may be driven by a single drive motor but, if necessary, a left wheel drive motor for driving the left wheel 136(L) and a right wheel drive motor for driving the right wheel 136(R) may each be provided.
- The driving direction of the main body 110 can be switched to the left or the right by making a difference between the rotational speeds of the left wheel 136(L) and the right wheel 136(R).
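- The steering principle described above, turning by giving the left and right wheels different speeds, corresponds to standard differential-drive kinematics; the sketch below is a generic illustration with assumed wheel-base and speed values, not code from this disclosure.

```python
import math


def differential_drive_step(x: float, y: float, heading: float,
                            v_left: float, v_right: float,
                            wheel_base: float, dt: float):
    """Advance a differential-drive pose (x, y, heading) by one time step.

    v_left / v_right are wheel speeds in m/s; wheel_base is the distance
    between the left and right wheels in meters.
    """
    v = (v_left + v_right) / 2.0              # forward speed
    omega = (v_right - v_left) / wheel_base   # turn rate in rad/s
    x += v * math.cos(heading) * dt
    y += v * math.sin(heading) * dt
    heading += omega * dt
    return x, y, heading


# Equal speeds go straight; a faster right wheel turns the body to the left.
print(differential_drive_step(0.0, 0.0, 0.0, 0.20, 0.25, 0.23, 0.1))
```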
- An inlet 110h through which air is suctioned may be formed in the bottom portion of the main body 110, and the main body 110 may include a suction device (not shown) that provides suction power so that air can be drawn in through the inlet 110h, and a dust container (not shown) that collects dust suctioned together with the air through the inlet 110h.
- the main body 110 may include a case 111 forming a space in which various components constituting the mobile robot 100, 100a, and 100b are accommodated.
- An opening for inserting and removing the dust container may be formed in the case 111, and a dust container cover 112 that opens and closes the opening may be rotatably provided with respect to the case 111.
- the battery 138 supplies not only a driving motor but also power necessary for the overall operation of the mobile robots 100, 100a and 100b.
- The mobile robots 100, 100a, and 100b may travel back to the charging station 200 for charging, and during this return travel they can detect the position of the charging station 200 by themselves.
- Charging station 200 may include a signal transmitter (not shown) for transmitting a predetermined return signal.
- the return signal may be an ultrasonic signal or an infrared signal, but is not limited thereto.
- the mobile robots 100, 100a, and 100b may include a signal detector (not shown) that receives a return signal.
- For example, the charging station 200 may transmit an infrared signal through the signal transmitter, and the signal detector may include an infrared sensor that detects the infrared signal.
- the mobile robots 100, 100a and 100b move to the position of the charging stand 200 according to the infrared signal transmitted from the charging stand 200 and dock with the charging stand 200. By the docking, charging is performed between the charging terminal 133 of the mobile robot 100, 100a, 100b and the charging terminal 210 of the charging table 200.
- The mobile robots 100, 100a, and 100b may also return to the charging station 200 based on image- or laser-pattern extraction.
- For example, the mobile robots 100, 100a, and 100b may recognize a specific pattern formed on the charging station 200 using an optical signal transmitted from the main body 110, extract it, and return to the charging station.
- the mobile robot 100, 100a, 100b may include a pattern optical sensor (not shown).
- The pattern light sensor may be provided in the main body 110; it projects an optical pattern into the active area in which the mobile robots 100, 100a, and 100b operate and photographs the area onto which the pattern light is projected, thereby obtaining an input image.
- the pattern light may be light of a specific pattern such as a cross pattern.
- the pattern light sensor may include a pattern irradiator for irradiating the pattern light and a pattern image acquisition unit for photographing an area to which the pattern light is irradiated.
- the pattern irradiator may include a light source and an optical pattern projection element (OPPE).
- The pattern light is generated by passing the light emitted from the light source through the optical pattern projection element.
- the light source may be a laser diode (LD), a light emitting diode (LED), or the like.
- the pattern irradiator may irradiate light toward the front of the main body, and the pattern image acquirer acquires an input image by capturing an area to which the pattern light is irradiated.
- the pattern image acquisition unit may include a camera, and the camera may be a structured light camera.
- the charging stand 200 may include two or more location marks spaced apart from each other at regular intervals.
- The position mark forms a mark that is distinguishable from the surrounding area when the pattern light is incident on its surface.
- Such a mark may result from the shape of the incident pattern light being deformed by the morphological characteristics of the position mark, or from the light reflectance (or absorptance) of the position mark's material differing from that of the surrounding area.
- The position mark may include an edge that forms the mark; because the pattern light incident on the surface of the position mark is bent at an angle at the edge, peaks appear as marks in the input image.
- the mobile robot 100, 100a, 100b may automatically search for a charging station when the battery level is low. Alternatively, the mobile robot 100, 100a, 100b may perform a charging station search even when a charging command is input from a user.
- the pattern extractor extracts the peaks from the input image, and the controller 140 obtains the location information of the extracted peaks.
- the location information may include a location in a three-dimensional space in consideration of the distance from the mobile robots 100, 100a, and 100b to the peaks.
- The controller 140 obtains the actual distance between the peaks based on the acquired position information of the peaks and compares it with a preset reference value; if the difference between the actual distance and the reference value falls within a certain range, it can be determined that the charging station 200 has been found.
- Alternatively, after acquiring a surrounding image through the camera of the image acquisition unit 120, the mobile robot 100, 100a, 100b may extract and identify the shape corresponding to the charging station 200 from the acquired image and return to the charging station 200.
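- A minimal sketch of the comparison described above: the measured distance between the extracted peak positions is checked against a preset reference value within a tolerance (the tolerance and names are illustrative assumptions).

```python
import math


def is_charging_station(peak_a, peak_b, reference_m: float,
                        tolerance_m: float = 0.02) -> bool:
    """peak_a, peak_b: (x, y, z) positions of two extracted peaks in meters."""
    actual = math.dist(peak_a, peak_b)
    return abs(actual - reference_m) <= tolerance_m


# Two peaks measured 10.1 cm apart against a 10 cm reference -> found.
print(is_charging_station((0.0, 0.0, 0.0), (0.101, 0.0, 0.0), reference_m=0.10))
```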
- the mobile robot 100, 100a, 100b may acquire a surrounding image through a camera of the image acquisition unit 120, identify a specific optical signal transmitted from the charging station 200, and return to the charging station.
- the image acquisition unit 120 photographs the driving zone, and may include a camera module.
- the camera module may include a digital camera.
- The digital camera may include at least one optical lens, an image sensor (e.g., a CMOS image sensor) including a plurality of photodiodes (e.g., pixels) on which an image is formed by light passing through the optical lens, and a digital signal processor (DSP) that constructs an image based on the signals output from the photodiodes.
- the digital signal processor may generate not only a still image but also a moving image composed of frames composed of the still image.
- The image acquisition unit 120 includes a front camera 120a provided to acquire an image of the area in front of the main body 110 and an upper camera 120b provided on the upper surface of the main body 110 to acquire an image of the ceiling in the driving zone; however, the positions and photographing ranges of the image acquisition unit 120 are not necessarily limited thereto.
- A camera may be installed on certain portions (e.g., front, rear, or bottom) of the mobile robot, and captured images can be acquired continuously during cleaning. Several such cameras may be installed at different locations for photographing efficiency. Images captured by the camera may be used to recognize the kinds of materials present in the corresponding space, such as dust, hair, or the floor, to determine whether cleaning has been performed, or to confirm the cleaning time.
- the front camera 120a may capture a situation of an obstacle or a cleaning area existing in the front of the moving direction of the mobile robot 100, 100a, or 100b.
- the image acquisition unit 120 may acquire a plurality of images by continuously photographing the periphery of the main body 110, and the obtained plurality of images may be stored in the storage unit 150. Can be.
- the mobile robots 100, 100a, and 100b may increase the accuracy of obstacle recognition by using a plurality of images, or increase the accuracy of obstacle recognition by selecting one or more images from the plurality of images and using effective data.
- the mobile robot 100, 100a, 100b may include a sensor unit 170 including sensors for sensing various data related to the operation and state of the mobile robot.
- the sensor unit 170 may include an obstacle detecting sensor 131 detecting a front obstacle.
- the sensor unit 170 may further include a cliff detection sensor 132 for detecting the presence of a cliff on the floor in the driving zone, and a lower camera sensor 139 for acquiring an image of the floor.
- the obstacle detecting sensor 131 may include a plurality of sensors installed at predetermined intervals on the outer circumferential surface of the mobile robot 100.
- the sensor unit 170 may include a first sensor disposed on the front surface of the main body 110, a second sensor disposed to be spaced left and right from the first sensor, and a third sensor. .
- the obstacle detecting sensor 131 may include an infrared sensor, an ultrasonic sensor, an RF sensor, a geomagnetic sensor, a position sensitive device (PSD) sensor, and the like.
- the position and type of the sensor included in the obstacle detecting sensor 131 may vary according to the type of the mobile robot, and the obstacle detecting sensor 131 may include more various sensors.
- The obstacle detecting sensor 131 is a sensor that detects the distance to an indoor wall or obstacle, and the present invention is not limited to a particular sensor type; hereinafter, an ultrasonic sensor is described as an example.
- The obstacle detecting sensor 131 detects an object, particularly an obstacle, in the driving (moving) direction of the mobile robot and transmits obstacle information to the controller 140. That is, the obstacle detecting sensor 131 may detect protrusions, household fixtures, furniture, wall surfaces, wall edges, and the like located on the moving path of the mobile robot or in front of or beside it, and transmit the information to the controller.
- the controller 140 may detect the position of the obstacle based on at least two signals received through the ultrasonic sensor, and control the movement of the mobile robot 100 according to the detected position of the obstacle.
- the obstacle detecting sensor 131 provided on the outer surface of the case 110 may include a transmitter and a receiver.
- the ultrasonic sensor may be provided such that at least one transmitter and at least two receivers are staggered from each other. Accordingly, the signal may be radiated at various angles, and the signal reflected by the obstacle may be received at various angles.
- the signal received by the obstacle detecting sensor 131 may be subjected to signal processing such as amplification and filtering, and then the distance and direction to the obstacle may be calculated.
- The sensor unit 170 may further include a motion detection sensor that detects the motion of the mobile robot 100, 100a, 100b as the main body 110 is driven and outputs motion information. As the motion detection sensor, a gyro sensor, a wheel sensor, an acceleration sensor, or the like may be used.
- the gyro sensor detects the rotation direction and detects the rotation angle when the mobile robot 100, 100a, 100b moves in accordance with the driving mode.
- the gyro sensor detects angular velocities of the mobile robots 100, 100a and 100b and outputs a voltage value proportional to the angular velocities.
- the controller 140 calculates the rotation direction and the rotation angle by using the voltage value output from the gyro sensor.
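- As an illustration of obtaining a rotation angle from a gyro whose output voltage is proportional to angular velocity, the following sketch integrates the angular velocity over time; the scale factor and sample rate are made-up example values.

```python
def integrate_gyro(voltage_samples, dt: float, volts_per_deg_s: float = 0.01) -> float:
    """Accumulate the rotation angle (degrees) from gyro voltage samples.

    Each sample is assumed proportional to angular velocity; the angle is
    the running sum of angular velocity multiplied by the sample period.
    """
    angle_deg = 0.0
    for v in voltage_samples:
        angular_velocity = v / volts_per_deg_s   # deg/s
        angle_deg += angular_velocity * dt
    return angle_deg


# Ten samples of 0.9 V at 100 Hz with 0.01 V per deg/s -> about 9 degrees.
print(integrate_gyro([0.9] * 10, dt=0.01))
```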
- the wheel sensor is connected to the left wheel 136 (L) and the right wheel 136 (R) to sense the number of revolutions of the wheel.
- the wheel sensor may be a rotary encoder.
- the rotary encoder detects and outputs rotational speeds of the left wheel 136 (L) and the right wheel 136 (R).
- The controller 140 may calculate the rotational speeds of the left and right wheels using the detected number of revolutions. In addition, the controller 140 may calculate the rotation angle using the difference between the rotational speeds of the left wheel 136(L) and the right wheel 136(R).
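- A minimal sketch of the wheel-encoder calculation described above: wheel revolutions give the traveled distance, and the left/right difference gives the change in heading (the wheel radius and wheel base are illustrative values).

```python
import math


def wheel_odometry(rev_left: float, rev_right: float,
                   wheel_radius: float = 0.035, wheel_base: float = 0.23):
    """Return (distance traveled in meters, heading change in radians)."""
    d_left = 2 * math.pi * wheel_radius * rev_left
    d_right = 2 * math.pi * wheel_radius * rev_right
    distance = (d_left + d_right) / 2.0
    d_heading = (d_right - d_left) / wheel_base
    return distance, d_heading


# A slight surplus of right-wheel revolutions corresponds to a left turn.
print(wheel_odometry(2.0, 2.2))
```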
- The acceleration sensor detects a change in the speed of the mobile robot 100, 100a, 100b, for example a change caused by starting, stopping, changing direction, or colliding with an object.
- the acceleration sensor is attached to the adjacent position of the main wheel or the auxiliary wheel, and can detect slippage or idle of the wheel.
- the acceleration sensor may be built in the controller 140 to detect a change in speed of the mobile robot 100, 100a, 100b. That is, the acceleration sensor detects the impact amount according to the speed change and outputs a voltage value corresponding thereto.
- the acceleration sensor can perform the function of the electronic bumper.
- The controller 140 may calculate a change in the position of the mobile robot 100, 100a, 100b based on the motion information output from the motion detection sensor. Such a position is a relative position, as opposed to the absolute position obtained using image information.
- Through such relative position recognition, the mobile robot can improve the performance of position recognition that uses image information and obstacle information.
- The mobile robot 100, 100a, 100b may include a power supply unit (not shown) having a rechargeable battery 138 to supply power to the robot cleaner.
- the power supply unit supplies driving power and operation power to the components of the mobile robots 100, 100a, and 100b, and when the remaining power is insufficient, the power supply unit may be charged by receiving a charging current from the charging stand 200.
- the mobile robots 100, 100a, and 100b may further include a battery detector (not shown) that detects a charging state of the battery 138 and transmits a detection result to the controller 140.
- the battery 138 is connected to the battery detector so that the battery remaining amount and the charging state are transmitted to the controller 140.
- the remaining battery level may be displayed on the screen of the output unit (not shown).
- The mobile robots 100, 100a, and 100b include an operation unit 137 through which on/off commands or various other commands can be input.
- Through the operation unit 137, various control commands necessary for the overall operation of the mobile robot 100 may be received.
- the mobile robot 100, 100a, 100b may include an output unit (not shown) to display reservation information, a battery state, an operation mode, an operation state, an error state, and the like.
- the mobile robots 100a and 100b include a controller 140 for processing and determining various types of information, such as recognizing a current location, and a storage unit 150 for storing various data.
- the mobile robot (100, 100a, 100b) may further include a communication unit 190 for transmitting and receiving data with the external terminal.
- The external terminal has an application for controlling the mobile robots 100a and 100b; by executing the application, it can display the map of the driving zone to be cleaned by the mobile robots 100a and 100b and designate a specific area on the map to be cleaned.
- Examples of the external terminal include a remote controller, a PDA, a laptop, a smartphone, a tablet, and the like equipped with an application for setting a map.
- The external terminal may communicate with the mobile robots 100a and 100b to display the current location of the mobile robot together with the map, and information about a plurality of areas may be displayed. In addition, the external terminal updates and displays the position of the mobile robot as it travels.
- the controller 140 controls the overall operation of the mobile robot 100 by controlling the image acquisition unit 120, the operation unit 137, and the driving unit 160 constituting the mobile robots 100a and 100b.
- the storage unit 150 records various types of information necessary for the control of the mobile robot 100 and may include a volatile or nonvolatile recording medium.
- The recording medium stores data that can be read by a microprocessor and includes a hard disk drive (HDD), a solid state disk (SSD), a silicon disk drive (SDD), a ROM, a RAM, a CD-ROM, magnetic tape, a floppy disk, an optical data storage device, and the like.
- the storage unit 150 may store a map for the driving zone.
- The map may be input by an external terminal, a server, or the like that can exchange information with the mobile robots 100a and 100b through wired or wireless communication, or may be generated by the mobile robots 100a and 100b through their own learning.
- the map may display the locations of the rooms in the driving zone.
- the current positions of the mobile robots 100a and 100b may be displayed on the map, and the current positions of the mobile robots 100a and 100b on the map may be updated during the driving process.
- the external terminal stores the same map as the map stored in the storage 150.
- the storage unit 150 may store cleaning history information. Such cleaning history information may be generated every time cleaning is performed.
- The map of the driving zone stored in the storage unit 150 may be a navigation map used for driving during cleaning, a SLAM (Simultaneous Localization and Mapping) map used for location recognition, a learning map in which information about obstacles and the like encountered is stored and used for learning-based cleaning, a global location map used for global location recognition, or an obstacle recognition map in which information about recognized obstacles is recorded.
- maps may be stored and managed in the storage unit 150 for each use, but the map may not be clearly classified for each use.
- a plurality of pieces of information may be stored in one map to be used for at least two purposes.
- For example, obstacle information recognized during driving may be recorded in the learning map so that the learning map can replace the obstacle recognition map, and the SLAM map used for location recognition may be used for global location recognition or may replace the global location map.
- the controller 140 may include a driving control module 141, a position recognition module 142, a map generation module 143, and an obstacle recognition module 144.
- the driving control module 141 controls the driving of the mobile robots 100, 100a and 100b, and controls the driving of the driving unit 160 according to the driving setting.
- the driving control module 141 may determine the driving paths of the mobile robots 100, 100a and 100b based on the operation of the driving unit 160.
- The driving control module 141 may grasp the current or past moving speed, traveled distance, and the like of the mobile robot 100 based on the rotational speed of the driving wheels 136, and may also grasp the current or past direction-change process based on the rotation directions of each driving wheel 136(L), 136(R). The position of the mobile robot 100, 100a, 100b may be updated on the map based on the driving information identified in this way.
- the map generation module 143 may generate a map of the driving zone.
- the map generation module 143 may generate a map by processing the image acquired through the image acquisition unit 120. That is, a cleaning map corresponding to the cleaning area can be created.
- the map generation module 143 may recognize the global location by processing the image acquired through the image acquisition unit 120 at each location in association with the map.
- the location recognition module 142 estimates and recognizes the current location.
- Using the image information from the image acquisition unit 120 in association with the map generation module 143, the position recognition module 142 can estimate and recognize the current position even when the position of the mobile robot 100, 100a, 100b changes suddenly.
- The mobile robot 100, 100a, 100b may recognize its position during continuous driving through the position recognition module 142, and may also learn the map and estimate the current position through the map generation module 143 and the obstacle recognition module 144 without the position recognition module 142.
- the image acquisition unit 120 acquires images around the mobile robot 100.
- an image acquired by the image acquisition unit 120 is defined as an 'acquisition image'.
- the acquired image includes various features such as lightings on the ceiling, edges, corners, blobs, and ridges.
- the map generation module 143 detects a feature from each of the acquired images.
- Various methods of detecting a feature from an image are well known in the field of computer vision technology.
- Several feature detectors are known that are suitable for the detection of these features. Examples include Canny, Sobel, Harris & Stephens / Plessey, SUSAN, Shi & Tomasi, Level curve curvature, FAST, Laplacian of Gaussian, Difference of Gaussians, Determinant of Hessian, MSER, PCBR, and Gray-level blobs detectors.
- the map generation module 143 calculates a descriptor based on each feature point.
- the map generation module 143 may convert feature points into descriptors using a scale invariant feature transform (SIFT) technique for feature detection.
- the descriptor is defined as a cluster of individual feature points existing in a specific space and can be expressed as an n-dimensional vector. For example, various features such as edges, corners, blobs, and ridges of the ceiling may be calculated as respective descriptors and stored in the storage unit 150.
- At least one descriptor of each acquired image may be classified into a plurality of groups according to a predetermined lower-classification rule, and the descriptors included in the same group may be converted into lower representative descriptors according to a predetermined lower-representation rule. That is, representative values of the descriptors obtained from the individual images may be designated and normalized.
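- A minimal sketch of the descriptor pipeline described above, assuming OpenCV (4.4 or later) for SIFT and using k-means clustering as one possible lower-classification rule, with the cluster centers playing the role of lower representative descriptors; the actual rules are not specified in this excerpt.

```python
import cv2
import numpy as np


def representative_descriptors(image_bgr: np.ndarray, n_groups: int = 8):
    """Detect SIFT feature points, compute their descriptors, and reduce
    them to n_groups representative descriptors (cluster centers)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    _, descriptors = sift.detectAndCompute(gray, None)
    if descriptors is None or len(descriptors) < n_groups:
        return descriptors  # too few features to group

    # k-means as an example lower-classification rule.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, _, centers = cv2.kmeans(descriptors.astype(np.float32), n_groups,
                               None, criteria, 3, cv2.KMEANS_PP_CENTERS)
    return centers  # one representative 128-dimensional descriptor per group
```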
- Since SIFT can detect features that are invariant to changes in scale, rotation, and brightness of the photographed subject, invariant (i.e., rotation-invariant) features of the same area can be detected even when the mobile robot 100 photographs it with different postures.
- In addition to SIFT, other feature detection techniques such as HOG (Histogram of Oriented Gradients), Haar features, Ferns, LBP (Local Binary Pattern), and MCT (Modified Census Transform) may be applied.
- The map generation module 143 classifies at least one descriptor of each acquired image into a plurality of groups according to a predetermined lower-classification rule, based on the descriptor information obtained from the image acquired at each position, and converts the descriptors included in the same group into lower representative descriptors according to a predetermined lower-representation rule.
- As another example, all descriptors gathered from the images acquired in a predetermined zone may be classified into a plurality of groups according to a predetermined lower-classification rule, and the descriptors included in the same group may each be converted into lower representative descriptors according to the predetermined lower-representation rule.
- the map generation module 143 may obtain a feature distribution of each location through the above process.
- Each positional feature distribution can be represented by a histogram or an n-dimensional vector.
- The map generation module 143 may also estimate an unknown current position based on descriptors calculated from the feature points, without applying the predetermined lower-classification rule and lower-representation rule.
- When the current position of the mobile robot 100, 100a, 100b becomes unknown due to a position jump or the like, the current position may be estimated based on the pre-stored descriptors or lower representative descriptors.
- the mobile robots 100, 100a, and 100b acquire an acquired image through the image acquisition unit 120 at an unknown current position. Through the image, various features such as lightings on the ceiling, edges, corners, blobs, and ridges are identified.
- the position recognition module 142 detects features from the acquired image. Description of the various methods of detecting features from an image in the field of computer vision technology and the various feature detectors suitable for the detection of these features are described above.
- the position recognition module 142 calculates a recognition descriptor through a recognition descriptor calculating step based on each recognition feature point.
- The terms “recognition feature point” and “recognition descriptor” are used to describe the recognition process and to distinguish it from the terms describing the process performed by the map generation module 143; they merely give different names to features of the world outside the mobile robot 100, 100a, 100b.
- the position recognition module 142 may convert a recognition feature point into a recognition descriptor by using a scale invariant feature transform (SIFT) technique for detecting the present feature.
- the recognition descriptor may be expressed as an n-dimensional vector.
- SIFT is an image recognition technique that selects feature points that can be easily identified in an acquired image, such as corner points, and then obtains an n-dimensional vector whose dimensions are numerical values representing the distribution characteristics (the direction of brightness change and the sharpness of that change) of the brightness gradients of the pixels belonging to a predetermined area around each feature point.
- Based on the information on at least one recognition descriptor obtained from the image acquired at the unknown current position, the position recognition module 142 converts the information into comparable information (a lower recognition feature distribution) according to a predetermined lower-conversion rule so that it can be compared with the position information to be compared (for example, the feature distribution of each position).
- Each positional feature distribution may be compared with the recognition feature distribution to calculate a similarity; a similarity (probability) is calculated for each position, and the position for which the greatest probability is calculated may be determined as the current position.
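- A minimal sketch of the comparison step described above, representing each position's feature distribution as a histogram (an n-dimensional vector) and selecting the stored position most similar to the distribution observed at the unknown position; cosine similarity is used here only as one possible similarity measure.

```python
import numpy as np


def most_likely_position(stored: dict, observed: np.ndarray) -> str:
    """stored: {position name: feature-distribution histogram};
    observed: histogram computed at the unknown current position."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    similarities = {name: cosine(hist, observed) for name, hist in stored.items()}
    return max(similarities, key=similarities.get)


stored_distributions = {
    "living_room": np.array([5, 1, 0, 2], dtype=float),
    "kitchen":     np.array([0, 3, 4, 1], dtype=float),
}
print(most_likely_position(stored_distributions, np.array([4, 1, 1, 2], dtype=float)))
# -> "living_room"
```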
- the controller 140 may distinguish the driving zone and generate a map composed of a plurality of regions, or recognize the current position of the main body 110 based on the pre-stored map.
- the controller 140 may transmit the generated map to an external terminal, a server, etc. through the communication unit 190.
- the controller 140 may store the map in the storage unit.
- The controller 140 transmits updated information to the external terminal so that the map stored in the external terminal and the map stored in the mobile robot 100, 100a, 100b remain the same.
- Because the maps are kept identical, the mobile robot 100, 100a, 100b can clean the designated area in response to a cleaning command from the mobile terminal, and the current position of the mobile robot can be displayed on the terminal.
- the map may be divided into a plurality of areas, the connection path connecting the plurality of areas, and may include information about obstacles in the area.
- the controller 140 determines whether the current position of the mobile robot matches the position on the map.
- the cleaning command may be input from a remote controller, an operation unit or an external terminal.
- When the cleaning command is received, the controller 140 recognizes the current position, restores the current position of the mobile robot 100, and controls the driving unit 160 to move to the designated area based on the current position.
- In this case, the position recognition module 142 may analyze the acquired image input from the image acquisition unit 120 and estimate the current position based on the map.
- the obstacle recognition module 144 or the map generation module 143 may also recognize the current position in the same manner.
- after recognizing the position and restoring the current position of the mobile robot 100, 100a, 100b, the driving control module 141 calculates the driving route from the current position to the designated area and controls the driving unit 160 to move to the designated area.
- the driving control module 141 may divide the entire driving zone into a plurality of areas according to the received cleaning pattern information, and set at least one area as a designated area.
- the driving control module 141 may calculate a driving route according to the received cleaning pattern information, travel along the driving route, and perform cleaning.
- the controller 140 may store the cleaning record in the storage 150 when the cleaning of the set designated area is completed.
- controller 140 may transmit the operation state or cleaning state of the mobile robot 100 to the external terminal and the server at a predetermined cycle through the communication unit 190.
- the external terminal displays the position of the mobile robot along with the map on the screen of the running application based on the received data, and outputs information on the cleaning state.
- the mobile robot 100, 100a, 100b moves in one direction until an obstacle or a wall surface is detected, and when the obstacle recognition module 144 recognizes the obstacle, it may determine a driving pattern, such as going straight or rotating, according to the properties of the recognized obstacle.
- for example, the mobile robot 100, 100a, 100b may continue to go straight if the recognized obstacle is of a kind that can be passed over. If the recognized obstacle is one that cannot be passed over, the mobile robot 100, 100a, 100b may rotate, move a certain distance, move again in the direction opposite to the initial movement direction until the obstacle is detected, and thus travel in a zigzag pattern.
- the mobile robots 100, 100a, and 100b may perform obstacle recognition and avoidance based on machine learning.
- the controller 140 may include an obstacle recognition module 144 that recognizes, in an input image, an obstacle previously learned by machine learning, and a driving control module 141 that controls the driving of the driving unit 160 based on the attribute of the recognized obstacle.
- the mobile robots 100, 100a, and 100b may include an obstacle recognition module 144 in which an attribute of an obstacle is learned by machine learning.
- Machine learning means that a computer learns from data and solves a problem without a human directly instructing it with the logic.
- In particular, deep learning based on an artificial neural network (ANN) may be used as the machine learning method.
- the artificial neural network may be implemented in a software form or a hardware form such as a chip.
- the obstacle recognition module 144 may include an artificial neural network (ANN) in the form of software or hardware in which the property of the obstacle is learned.
- the obstacle recognition module 144 may include a deep neural network (DNN), such as a convolutional neural network (CNN), a recurrent neural network (RNN), or a deep belief network (DBN), that has been trained by deep learning.
- the obstacle recognition module 144 may determine an attribute of an obstacle included in the input image data based on weights between nodes included in the deep neural network DNN.
- the controller 140 may control to select, based on the moving direction and the moving speed of the main body 110, an image of a specific time point before the obstacle detection time of the sensor unit 170 from among the plurality of images acquired by the image acquisition unit 120.
- when the image acquisition unit 120 acquires an image by using the obstacle detection of the sensor unit 170 as a trigger signal, the mobile robot continues to move, so the obstacle may not be included in the acquired image or may appear small.
- therefore, in the present embodiment, based on the moving direction and the moving speed of the main body 110, an image of a specific time point before the obstacle detection time of the sensor unit 170 may be selected from among the plurality of consecutive images acquired by the image acquisition unit 120 and used as obstacle recognition data.
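- A minimal sketch of this frame selection follows, assuming the continuously captured frames are kept in a ring buffer with timestamps; the fixed look-back distance used to convert the moving speed into a look-back time is a hypothetical rule, not the patent's formula.

```python
from collections import deque

class FrameBuffer:
    """Keeps the most recent N frames with their capture timestamps."""
    def __init__(self, maxlen=10):
        self.frames = deque(maxlen=maxlen)   # (timestamp_s, image) pairs

    def add(self, timestamp_s, image):
        self.frames.append((timestamp_s, image))

    def select_before_trigger(self, trigger_time_s, speed_mps,
                              lookback_dist_m=0.30):
        """Pick the frame captured roughly lookback_dist_m / speed seconds
        before the obstacle-detection trigger (hypothetical rule: a slower
        speed means looking further back so the obstacle still fills the view)."""
        lookback_s = lookback_dist_m / max(speed_mps, 1e-3)
        target = trigger_time_s - lookback_s
        # frame whose timestamp is closest to the target time
        return min(self.frames, key=lambda f: abs(f[0] - target))[1]
```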
- the obstacle recognition module 144 may recognize the property of the obstacle included in the selected specific viewpoint image based on data previously learned by machine learning.
- the controller 140 may control to extract a partial region of the image acquired by the image acquisition unit 120 in correspondence with the direction of the obstacle detected by the sensor unit 170.
- the image acquisition unit 120, in particular the front camera 120a, may acquire an image within a predetermined angle range in the moving direction of the mobile robot 100, 100a, 100b.
- the controller 140 may determine the property of an obstacle in the moving direction by using only a partial region instead of using the entire image acquired by the image acquisition unit 120, in particular, the front camera 120a.
- the controller 140 may further include an image processing module 145 that extracts a partial region of the image acquired by the image acquisition unit 120 in correspondence with the direction of the obstacle detected by the sensor unit 170.
- alternatively, the mobile robot 100b may further include a separate image processing unit 125 that extracts a partial region of the image acquired by the image acquisition unit 120 in correspondence with the direction of the obstacle detected by the sensor unit 170.
- the mobile robot 100a according to the embodiment of FIG. 6 and the mobile robot 100b according to the embodiment of FIG. 7 have the same configuration except for the image processing module 145 and the image processing unit 125.
- the image acquisition unit 120 may directly extract a partial region of the image.
- the obstacle recognition module 144 learned by the machine learning has a higher recognition rate as the object to be learned occupies a large portion of the input image data.
- the recognition rate may be increased by extracting another region of the image acquired by the image acquisition unit 120 as recognition data according to the direction of the obstacle detected by the sensor unit 170 such as an ultrasonic sensor.
- the obstacle recognition module 144 may recognize the obstacle on the basis of the data previously learned by machine learning from the extracted image.
- the driving control module 141 may control the driving of the driving unit 160 based on the recognized attribute of the obstacle.
- the controller 140 may control to extract the lower right region of the image obtained by the image acquisition unit when the obstacle is detected to the right of the front of the main body, to extract the lower left region when the obstacle is detected to the left of the front of the main body, and to extract the lower center region when the obstacle is detected straight ahead of the main body.
- in addition, the controller 140 may control to shift and extract the region to be extracted from the image acquired by the image acquisition unit so as to correspond to the direction of the detected obstacle.
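- The region extraction described above might look like the following sketch; the direction encoding ('left', 'center', 'right') and the crop size parameters are assumptions for illustration.

```python
def extract_obstacle_region(image, direction, crop_w, crop_h):
    """Crop a sub-region of the acquired image according to where the sensor
    detected the obstacle.  `image` is an H x W (x C) array; `direction` is
    one of 'left', 'center', 'right' (an assumed encoding)."""
    h, w = image.shape[:2]
    y0 = h - crop_h                      # lower part of the frame
    if direction == "right":
        x0 = w - crop_w                  # lower-right region
    elif direction == "left":
        x0 = 0                           # lower-left region
    else:
        x0 = (w - crop_w) // 2           # lower-center region
    return image[y0:y0 + crop_h, x0:x0 + crop_w]
```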
- the storage 150 may store input data for determining obstacle attributes and data for learning the deep neural network DNN.
- the storage 150 may store the original image acquired by the image acquisition unit 120 and the extracted images from which the predetermined region is extracted.
- the storage unit 150 may store weights and biases that form the deep neural network (DNN).
- weights and biases constituting the deep neural network structure may be stored in an embedded memory of the obstacle recognition module 144.
- the obstacle recognition module 144 may perform a learning process using the extracted image as training data whenever a partial region of the image acquired by the image acquisition unit 120 is extracted, or may perform the learning process after a predetermined number of extracted images have been obtained.
- that is, the obstacle recognition module 144 may update the deep neural network (DNN) structure, such as its weights, by adding a recognition result every time an obstacle is recognized, or may perform a training process with the secured training data after a predetermined number of training data have been secured, thereby updating the DNN structure.
- the mobile robot 100, 100a, 100b transmits the original image or the extracted image acquired by the image acquisition unit 120 to a predetermined server through the communication unit 190, and data related to machine learning from the predetermined server. Can be received.
- the mobile robots 100, 100a, and 100b may update the obstacle recognition module 144 based on the data related to machine learning received from the predetermined server.
- FIG. 8 is an example of a simplified internal block diagram of a server according to an embodiment of the present invention.
- the server 70 may include a communication unit 820, a storage unit 830, a learning module 840, and a processor 810.
- the processor 810 may control overall operations of the server 70.
- the server 70 may be a server operated by a home appliance manufacturer such as the mobile robots 100, 100a, 100b, or a server operated by a service provider, or may be a kind of cloud server.
- the communication unit 820 may receive various data such as status information, operation information, and the like from a portable terminal, a home appliance such as the mobile robots 100, 100a, 100b, a gateway, or the like.
- the communication unit 820 may transmit data corresponding to the received various information to a home appliance, a gateway, etc., such as the mobile terminal, the mobile robots 100, 100a, and 100b.
- the communication unit 820 may include one or more communication modules, such as an internet module and a mobile communication module.
- the storage unit 830 may store the received information and include data for generating result information corresponding thereto.
- the storage unit 830 may store data used for machine learning, result data, and the like.
- the learning module 840 may serve as a learner of a home appliance such as the mobile robots 100, 100a, and 100b.
- the learning module 840 may include an artificial neural network, for example a deep neural network (DNN) such as a convolutional neural network (CNN), a recurrent neural network (RNN), or a deep belief network (DBN), and may train the deep neural network.
- both unsupervised learning and supervised learning may be used.
- the controller 810 may control to update the artificial neural network structure of the home appliance, such as the mobile robot 100, 100a, 100b, to the learned artificial neural network structure after learning, according to the setting.
- FIGS. 9 to 12 are diagrams referred to for describing deep learning.
- Deep learning technology, a kind of machine learning, learns from data in multiple levels of depth.
- Deep learning may represent a set of machine learning algorithms that extract key data from a plurality of data as the level increases.
- the deep learning structure may include an artificial neural network (ANN).
- the deep learning structure may include a deep neural network (DNN) such as a convolutional neural network (CNN), a recurrent neural network (RNN), and a deep belief network (DBN).
- an artificial neural network may include an input layer, a hidden layer, and an output layer. Each layer includes a plurality of nodes, and each layer is connected to the next layer. Nodes between adjacent layers may be connected to each other with a weight.
- a computer finds a certain pattern from input data 1010 and forms a feature map.
- the computer may extract the low level feature 1020, the middle level feature 1030, and the high level feature 1040 to recognize the object and output the result 1050.
- the neural network can be abstracted to higher level features as it goes to the next layer.
- each node may operate based on an activation model, and an output value corresponding to an input value may be determined according to the activation model.
- the output value of any node, eg, lower level feature 1020, may be input to the next layer connected to that node, eg, the node of mid level feature 1030.
- a node of a next layer for example, a node of the midlevel feature 1030, may receive values output from a plurality of nodes of the low level feature 1020.
- the input value of each node may be a value to which a weight is applied to the output value of the node of the previous layer.
- the weight may mean the connection strength between nodes.
- Deep learning can also be seen as a process of finding an appropriate weight.
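- As a small worked illustration of the weighted connections just described, the sketch below computes one layer's outputs as weighted sums of the previous layer's outputs passed through a ReLU activation; the sizes and values are arbitrary assumptions.

```python
import numpy as np

def dense_layer(inputs, weights, biases):
    """One fully connected layer: each node's input is the weighted sum of
    the previous layer's outputs, passed through a ReLU activation."""
    return np.maximum(0.0, weights @ inputs + biases)

# Hypothetical 3-level pass: low-level -> mid-level -> high-level features.
rng = np.random.default_rng(1)
x = rng.random(16)                           # low-level feature vector
w1, b1 = rng.standard_normal((8, 16)), np.zeros(8)
w2, b2 = rng.standard_normal((4, 8)), np.zeros(4)
mid = dense_layer(x, w1, b1)
high = dense_layer(mid, w2, b2)
print(high.shape)                            # (4,)
```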
- an output value of an arbitrary node may be input to the next layer connected to the node, for example, a node of the higher level feature 1040.
- a node of the next layer for example, a node of the high level feature 1040 may receive values output from a plurality of nodes of the mid level feature 1030.
- the artificial neural network may extract feature information corresponding to each level by using a learned layer corresponding to each level.
- the artificial neural network can be abstracted sequentially to recognize a predetermined object by utilizing feature information of the highest level.
- for example, the computer may distinguish bright and dark pixels according to the brightness of pixels in the input image, distinguish simple shapes such as borders and edges, and then distinguish more complicated shapes and objects. Finally, the computer can figure out the form that defines a human face.
- the deep learning structure according to the present invention may use various known structures.
- the deep learning structure according to the present invention may be a convolutional neural network (CNN), a recurrent neural network (RNN), a deep belief network (DBN), or the like.
- a deep belief network (DBN) is a deep learning structure formed by stacking multiple layers of restricted Boltzmann machines (RBM), a deep learning technique. By repeating RBM learning, once a certain number of layers is formed, a deep belief network (DBN) having the corresponding number of layers can be constructed.
- FIG. 11 is a diagram illustrating a convolutional neural network (CNN) structure.
- a convolutional neural network may also include an input layer, a hidden layer, and an output layer.
- the predetermined image 1100 is input to an input layer.
- a hidden layer may include a plurality of layers, and may include a convolution layer and a sub-sampling layer.
- a convolutional neural network basically uses various filters for extracting the features of the image through convolution operations, and pooling or non-linear activation functions for adding nonlinear characteristics.
- Convolution is mainly used for filter operation in the field of image processing and is used to implement a filter for extracting features from an image.
- for example, referring to FIG. 12, when a convolution filter of a predetermined size is applied to a portion of the image data, a result value 1201 is obtained, and when the filter window is moved and the operation is repeated, a predetermined result value 1202 is obtained.
- the convolution layer may be used to perform convolution filtering to filter information extracted from the previous layer using a filter having a predetermined size (eg, the 3X3 window illustrated in FIG. 12).
- the convolution layer performs a convolution operation on the input image data 1100 and 1102 using a convolution filter, and generates feature maps 1101 and 1103 in which the characteristics of the input image 1100 are expressed.
- as many filtered images as there are filters may be generated, according to the number of filters included in the convolution layer.
- the convolutional layer may be composed of nodes included in the filtered images.
- the sub-sampling layer paired with the convolution layer may include the same number of feature maps as the paired convolution layer.
- the sub-sampling layer reduces the dimensions of the feature maps 1101 and 1103 through sampling or pooling.
- the output layer recognizes the input image 1100 by combining various features represented in the feature map 1104.
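- For illustration, a toy convolutional network with convolution, sub-sampling, and output layers could be sketched as follows; PyTorch is assumed, and the layer sizes and the six obstacle classes are illustrative choices, not the patent's model.

```python
import torch
import torch.nn as nn

class MiniObstacleCNN(nn.Module):
    """Toy convolution + sub-sampling + output structure (sizes assumed)."""
    def __init__(self, num_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1),   # convolution layer
            nn.ReLU(),
            nn.MaxPool2d(2),                              # sub-sampling layer
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 16 * 16, num_classes)  # output layer

    def forward(self, x):                 # x: (N, 3, 64, 64) cropped region
        f = self.features(x)
        return self.classifier(f.flatten(1))

logits = MiniObstacleCNN()(torch.randn(1, 3, 64, 64))
print(logits.shape)                        # torch.Size([1, 6])
```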
- the obstacle recognition module of the mobile robot according to the present invention may use the various deep learning structures described above.
- a convolutional neural network (CNN) structure that is widely used for object recognition in an image may be used.
- the learning of the artificial neural network can be achieved by adjusting the weight of the connection line between nodes so that a desired output comes out for a given input.
- the neural network can continuously update the weight value by learning.
- a method such as back propagation may be used for learning an artificial neural network.
- FIG. 13 and 14 are views referred to for explaining the obstacle recognition of the obstacle recognition module 144.
- the obstacle recognition module 144 may classify and recognize obstacles into classes such as a fan, a home theater, a power strip, a lamp support, human hair, a barrier, and the like.
- in addition, the obstacle recognition module 144 may classify classes such as a fan, a home theater, a power strip, a lamp support, human hair, and the like into a dangerous obstacle super-class and recognize them as such.
- the obstacle recognition module 144 may classify an obstacle over which the robot can travel straight, such as a barrier, into a non-hazardous obstacle super-class and recognize it as such.
- for example, the obstacle recognition module 144 may recognize an input image and obtain a recognition result in which a fan has a confidence value of 0.95 and a home theater has a confidence value of 0.7. In this case, the obstacle recognition module 144 may output the fan, the recognition result having the higher confidence value, as the recognition result for the input image.
- the confidence value may be normalized to a range of 0.0 to 1.0.
- as another example, the obstacle recognition module 144 may recognize the input image and obtain a recognition result in which the fan has a confidence value of 0.35 and the home theater has a confidence value of 0.4.
- in this case, the obstacle recognition module 144 may determine the input to be unknown data without selecting a specific recognition result, because the confidence values of both recognition results are lower than the reference value.
- the obstacle recognition module 144 may recognize the input image, and may obtain a recognition result in which the fan has a confidence value of 0.95 and the home theater has a confidence value of 0.9.
- in this case, the obstacle recognition module 144 does not select a specific recognition result because the confidence values of both recognition results are higher than the reference value; instead, it may judge the obstacle to be a dangerous obstacle, which is the higher-level concept.
- the driving control module 141 may control the drive unit 160 to move to avoid the dangerous obstacle.
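- The confidence-threshold behavior in these examples could be sketched as follows; the 0.8 reference value and the list of dangerous classes are assumptions chosen so that the three examples above are reproduced.

```python
def interpret_recognition(results, threshold=0.8):
    """Confidence-threshold logic sketched from the examples above.
    `results` maps class name -> confidence in [0.0, 1.0]; the 0.8 reference
    value and the danger class list are illustrative assumptions."""
    DANGEROUS = {"fan", "home theater", "power strip", "lamp support", "human hair"}
    confident = {cls: c for cls, c in results.items() if c >= threshold}
    if not confident:
        return "unknown"                              # no result is reliable
    if len(confident) == 1:
        return next(iter(confident))                  # single confident class
    # several confident classes: only the super-class can be trusted
    return "dangerous obstacle" if set(confident) & DANGEROUS else "non-dangerous obstacle"

print(interpret_recognition({"fan": 0.95, "home theater": 0.7}))   # fan
print(interpret_recognition({"fan": 0.35, "home theater": 0.4}))   # unknown
print(interpret_recognition({"fan": 0.95, "home theater": 0.9}))   # dangerous obstacle
```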
- FIG. 15 is a flowchart illustrating a control method of a mobile robot according to an embodiment of the present invention.
- the mobile robot (100, 100a, 100b) can move according to the command or setting to perform the cleaning (S1510).
- the image acquisition unit 120 may acquire a plurality of images by continuously photographing the periphery of the main body 110 while moving (S1520).
- the plurality of consecutive images acquired by the image acquisition unit 120 may be stored in the storage unit 150.
- when an obstacle is detected through the sensor unit 170 during the movement (S1530), the controller 140 may select, based on the moving direction and the moving speed of the main body 110, an image of a specific time point before the obstacle detection time of the sensor unit 170 from among the plurality of consecutive images (S1540).
- the movement direction and the movement speed may be calculated by the driving control module 141 or the like based on the output of the motion detection sensor of the sensing unit 170.
- alternatively, since the moving speed is usually constant, it may suffice to determine only the moving direction of the main body 110 in order to select the image of a specific time point.
- when the movement direction is straight travel or rotational travel of less than a predetermined reference value (reference angle), the controller 140 may select the image of a specific time point before the obstacle detection time of the sensor unit 170 from among the plurality of consecutive images.
- the shooting range of the camera is larger than the sensing range of a sensor such as an ultrasonic sensor.
- when an image is acquired using the obstacle detection of the sensor unit 170 as a signal, the mobile robot continues to travel, and thus the acquired image may not include the characteristics of the obstacle. In addition, when the sensing range of the sensor unit 170 is short, this may occur with a higher probability.
- the controller 140 may select a specific viewpoint image before the obstacle detection time of the sensor unit 170 by reflecting the moving direction and the speed when driving straight or driving close to the straight driving.
- the controller 140 may select an image of a past viewpoint based on an obstacle detection timing of the sensor unit 170.
- the faster the moving speed, the longer the distance the mobile robot 100 travels after the obstacle detection time of the sensor unit 170. In addition, since the image acquisition unit 120 captures and acquires a plurality of images at a constant rate, the faster the moving speed, the longer the distance the mobile robot travels between the time when one frame is captured and the time when the next frame is captured.
- accordingly, the faster the moving speed, the closer to the obstacle detection time of the sensor unit 170 the selected image should be, so that the obstacle occupies a larger area of the image.
- conversely, since the image acquisition unit 120 acquires more images over the same distance as the moving speed becomes slower, it is preferable to select an image of a more past time point relative to the obstacle detection time of the sensor unit 170.
- the obstacle recognition module 144 may select the specific view image and use it as input data of obstacle recognition.
- the obstacle recognition module 144 learned by the machine learning has a high recognition rate as the object to be learned occupies a large portion of the input image data.
- the controller 140 may control to cut and extract a partial region of the selected specific view image in response to the direction of the obstacle detected by the sensor unit 170 (S1550).
- the image acquisition unit 120 extracts another region from the acquired image and uses it as recognition data, thereby increasing the recognition rate.
- the obstacle recognition module 144 may recognize the property of the obstacle included in the image of the partial region extracted from the selected specific view image (S1560).
- the obstacle recognition module 144 may include an artificial neural network trained by machine learning to recognize attributes such as the obstacle type, and may recognize, based on the previously learned data, the attribute of the obstacle included in the image of the partial region extracted from the selected specific viewpoint image (S1560).
- the obstacle recognition module 144 includes a convolutional neural network (CNN), which is one of the deep learning structures, and the previously learned CNN recognizes the attribute of the obstacle included in the input data and outputs the result.
- the recognition accuracy may be further improved by performing the aforementioned obstacle recognition process several times and deriving a final recognition result based on the plurality of recognition results.
- the controller 140 may store the detected position information of the obstacle and the position information of the mobile robot in the storage unit 150, and may control to register an area having a predetermined size around the detected position of the obstacle as an obstacle area in the map.
- the property of the obstacle may be recognized, and the final obstacle property may be determined based on the plurality of recognition results.
- that is, the controller 140 may sequentially recognize the attributes of the obstacle with respect to the images acquired through the image acquisition unit 120 in a predetermined obstacle area, and may determine the final attribute of the obstacle based on the plurality of sequentially recognized recognition results. This embodiment will be described later in detail with reference to FIGS. 29 to 34.
- the driving control module 141 may control the driving of the driving unit 160 based on the recognized attribute of the obstacle (S1570).
- the driving control module 141 may control the driving to bypass the obstacle.
- the driving control module 141 may control the robot to continue running straight when the recognized obstacle is of a height that the robot can pass over, such as a low barrier.
- the driving control module 141 may control the robot to bypass the obstacle when the obstacle is one that is likely to restrain the robot during movement even though it is of low height, such as the base of a fan, human hair, a power strip, or an electric wire.
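- A minimal sketch of mapping a recognized attribute to a driving action is shown below; the class names and the avoid/straight split are illustrative assumptions.

```python
def plan_motion(obstacle_class):
    """Map a recognized obstacle attribute to a driving action.
    Class names and the avoid/straight split are assumed for illustration."""
    LOW_BUT_RISKY = {"fan base", "human hair", "power strip", "electric wire"}
    CLIMBABLE = {"threshold", "low barrier"}
    if obstacle_class in CLIMBABLE:
        return "go_straight"          # low obstacle the robot can pass over
    if obstacle_class in LOW_BUT_RISKY:
        return "avoid"                # low but likely to restrain the robot
    return "avoid"                    # default: bypass anything unknown

print(plan_motion("power strip"))      # avoid
print(plan_motion("threshold"))        # go_straight
```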
- FIG. 16 is a flowchart illustrating a control method of a mobile robot according to an embodiment of the present invention.
- the mobile robot 100, 100a, 100b may move according to a command or a setting (S1610).
- an obstacle may be detected by sensing a reflected ultrasonic signal (S1620).
- the image acquisition unit 120 may continuously acquire the plurality of images by photographing the front and the surroundings of the mobile robot (100, 100a, 100b).
- the controller 140 may select a specific viewpoint image of a past viewpoint in consideration of a moving direction and a speed from among a plurality of images acquired through the image acquisition unit 120, and based on data previously learned by machine learning, The property of the obstacle detected in the selected specific view image may be determined.
- controller 140 may determine whether the detected obstacle is at a height that may be exceeded (S1630).
- if the detected obstacle is not of a height that can be passed over, the controller 140 may control the robot to rotate by 90 degrees and travel so as to bypass the obstacle (S1655).
- otherwise, the controller 140 may determine the attribute information of the detected obstacle (S1640). That is, the controller 140 may determine whether the recognized obstacle is an obstacle over which the robot may proceed because the possibility of restraint is low.
- if so, the controller 140 may control the robot to continue to move straight (S1650).
- conventionally, if the detected obstacle is of a height that can be passed over, that is, a low obstacle, the robot simply drives straight over it.
- however, when the mobile robot becomes constrained, it tries to escape from the restrained state by applying motions such as shaking left and right, and a safety accident such as peeling off the coating of an electric wire may occur.
- the present invention may improve reliability by recognizing obstacle property information by using machine learning and image information and determining a driving pattern according to the recognized obstacle property.
- FIGS. 17 to 25 are views referred to for describing a control method of a mobile robot according to an embodiment of the present invention.
- FIG 17 illustrates a case where the obstacle 1700 is detected in the front direction of the mobile robot 100.
- the image acquisition unit 120 may continuously photograph and acquire a plurality of images.
- the mobile robot 100 obtains a first image 1711 at a first position 1710, a second image 1721 at a second position 1720, and a third image 1731 at a third position 1730.
- a predetermined number of images may be stored in the storage 150.
- the image of the earliest time point may be deleted, and the newly acquired image may be stored.
- the mobile robot 100 may start image recognition according to a trigger signal generated by ultrasonic signal detection.
- since the ultrasonic sensor used for the trigger signal has a short range, the characteristics of the object to be recognized may be missing from the image 1731 obtained at the moment the obstacle 1700 is detected and the trigger occurs.
- therefore, the present invention stores the successive images in the storage unit 150, and after determining whether the driving direction is straight, performs obstacle recognition using the first image 1711 instead of the third image 1731 obtained at the trigger time.
- since the mobile robot 100 often travels at a constant speed, it is possible to determine only whether the driving is straight, select an image of a predetermined time point, for example an image two frames before the obstacle detection time of the sensor unit 170, and perform obstacle recognition on it.
- the controller 140 may determine how far before the obstacle detection time of the sensor unit 170 the image to be selected should be, in consideration of the detection range and performance of the sensor unit 170 and the processing speed of the obstacle recognition process.
- the recognition rate may be increased by extracting and recognizing a partial region of the image of the selected viewpoint without using the entire image of the selected viewpoint as obstacle recognition input data.
- the center, left, and right regions of the image may be extracted based on the direction in which the obstacle is detected, rather than simply cropping to a predetermined size based on the center region of the entire image.
- for example, when the obstacle is detected to the right of the front of the main body, the lower right region of the selected specific viewpoint image is extracted; when the obstacle is detected to the left of the front of the main body, the lower left region of the selected specific viewpoint image may be extracted; and when the obstacle is detected straight ahead of the main body, the lower center region of the selected specific viewpoint image may be extracted.
- accordingly, since the object to be recognized occupies the largest proportion of the image, the attribute recognition rate of the obstacle can be improved.
- the sensor unit 170 may include a first sensor S1 disposed on the front surface of the main body of the mobile robot 100, and a second sensor S2 and a third sensor S3 spaced apart from the first sensor S1 to the left and the right.
- the first sensor S1 may act as a transmitter and the second sensor S2 and the third sensor S3 may act as a receiver.
- the first sensor S1 may emit an ultrasonic signal
- the second sensor S2 and the third sensor S3 may receive a signal reflected by an obstacle.
- the direction in which the obstacle is located and the distance to the obstacle may be determined using known ultrasound recognition methods.
- FIG. 19A illustrates a case where the obstacle X1 is detected at the center of the front direction of the mobile robot 100.
- in this case, a predetermined area 1910 having a size of a2 x b2 may be extracted from the lower center of the entire original image 1900, which has a size of a1 x b1 and is obtained by the image acquisition unit 120.
- FIGS. 20 to 24 illustrate the case where the obstacle is detected from a side direction.
- the image acquisition unit 120 may continuously photograph and acquire a plurality of images.
- the mobile robot 100 obtains a first image 2110 at a first position 2010, a second image 2120 at a second position 2020, and a third image 2130 at a third position 2030.
- a predetermined number of images may be stored in the storage 150.
- the image of the earliest time point may be deleted, and the newly acquired image may be stored.
- obstacle recognition may be performed using the first image 2110, which was acquired earlier, instead of the third image 2130 acquired at the trigger time.
- the right region in the first image 2110 may be cut out and extracted to recognize an obstacle.
- FIG. 23A illustrates a case where an obstacle X1 is detected in the front right direction of the mobile robot 100, and FIG. 24A illustrates a case where an obstacle X1 is detected in the front left direction of the mobile robot 100.
- the controller 140 may control to shift the extraction target region within the image acquired by the image acquisition unit 120 in proportion to the difference between the distance l1 from the detected obstacle X1 to the second sensor S2 and the distance l2 from the detected obstacle X1 to the third sensor S3, and then to extract it.
- if the distance l1 between the detected obstacle X1 and the second sensor S2 is greater than the distance l2 between the detected obstacle X1 and the third sensor S3 (l1 > l2), it may be determined that the obstacle X1 is detected in the front right direction of the mobile robot 100.
- in this case, a predetermined region 1920 having a size of a2 x b2 may be extracted from the lower right of the entire original image 1900, which has a size of a1 x b1 and is obtained by the image acquisition unit 120.
- the predetermined value d1 by which the region is shifted may be proportional to the difference l1 - l2 between the distance l1 from the detected obstacle X1 to the second sensor S2 and the distance l2 from the detected obstacle X1 to the third sensor S3.
- conversely, if the distance l1 between the detected obstacle X1 and the second sensor S2 is less than the distance l2 between the detected obstacle X1 and the third sensor S3 (l1 < l2), it may be determined that the obstacle X1 is detected in the front left direction of the mobile robot 100.
- in this case, a predetermined area 1930 having a size of a2 x b2 may be extracted from the lower left of the original image 1900, which has a size of a1 x b1 and is obtained by the image acquisition unit 120.
- the predetermined value d2 by which the region is shifted may be proportional to the difference l2 - l1 between the distance l1 from the detected obstacle X1 to the second sensor S2 and the distance l2 from the detected obstacle X1 to the third sensor S3.
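- A sketch of this shifted extraction follows, assuming the shift in pixels is simply proportional to the distance difference; the proportionality gain is a hypothetical value.

```python
def extract_shifted_region(image, l1, l2, crop_w, crop_h, gain=200.0):
    """Crop the lower part of the image, shifting the crop window in
    proportion to the difference between the two receiver distances
    (l1: obstacle to second sensor, l2: obstacle to third sensor, in metres).
    The gain in pixels per metre is an assumed value."""
    h, w = image.shape[:2]
    y0 = h - crop_h
    center_x = (w - crop_w) // 2
    shift_px = int(gain * (l1 - l2))     # positive -> obstacle toward the right
    x0 = min(max(center_x + shift_px, 0), w - crop_w)
    return image[y0:y0 + crop_h, x0:x0 + crop_w]
```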
- a specific viewpoint image before the obstacle detection time of the sensor unit 170 may be selected from the plurality of consecutive images.
- the mobile robot 100 may move while rotating in the order of the first position 2510, the second position 2520, and the third position 2530. In this case, if the mobile robot 100 rotates below a predetermined threshold, obstacle recognition may be performed in the same manner as driving straight.
- the viewing range 2501 may refer to a range in which the image acquisition unit 120 may acquire an image.
- the mobile robot may continuously update its artificial neural network (ANN) or deep neural network (DNN) structure by performing a learning process using the image of the selected specific time point, or the image of the region extracted from it, as training data.
- alternatively, the mobile robot may transmit the image of the selected specific time point, or the image of the region extracted from it, to a predetermined server and receive data related to machine learning from the predetermined server. Thereafter, the mobile robot may update the obstacle recognition module 144 based on the data related to machine learning received from the predetermined server.
- FIG. 26 is a view referred to for describing a method of operating a mobile robot and a server according to an embodiment of the present invention
- FIG. 27 is a flowchart illustrating a method of operating a mobile robot and a server according to an embodiment of the present invention.
- the obstacle recognition module 144 of the mobile robot 100 may be equipped with a deep neural network (DNN) structure 144a such as a convolutional neural network (CNN).
- the pre-learned deep neural network (DNN) structure 144a may receive input data for recognition (S2710), recognize an attribute of an obstacle included in the input data (S2720), and output the result (S2730).
- Unknown data that is not recognized by the deep neural network (DNN) structure 144a may be stored in its storage space 144b in the storage 150 or the obstacle recognition module 144 (S2740).
- unknown data not recognized by the obstacle recognition module 144 may be transmitted to the server 70 through the communication unit 190 (S2741).
- the data that the obstacle recognition module 144 has successfully recognized may be transmitted to the server 70.
- the server 70 may train a deep neural network (DNN) structure using training data and generate a configuration of learned weights.
- after the learning, the server 70 may transmit the updated deep neural network (DNN) structure data to the mobile robot 100 so that the mobile robot 100 is updated (S2743).
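- The robot-server exchange described above might be sketched as follows; the endpoint URLs and the use of a torch state_dict are assumptions made only for illustration.

```python
import io
import requests
import torch

UNKNOWN_ENDPOINT = "https://example.com/api/unknown-images"    # hypothetical URL
WEIGHTS_ENDPOINT = "https://example.com/api/dnn-weights"       # hypothetical URL

def send_unknown_image(image_bytes):
    """Upload an image the on-robot DNN could not classify (sketch only)."""
    requests.post(UNKNOWN_ENDPOINT, files={"image": image_bytes}, timeout=10)

def update_local_model(model):
    """Fetch retrained weights from the server and load them into the on-robot
    network; assumes the server serves a torch state_dict as raw bytes."""
    resp = requests.get(WEIGHTS_ENDPOINT, timeout=30)
    state_dict = torch.load(io.BytesIO(resp.content), map_location="cpu")
    model.load_state_dict(state_dict)
    return model
```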
- FIG. 28 is a flowchart illustrating a control method of a mobile robot according to an embodiment of the present invention.
- the mobile robots 100, 100a, and 100b may move according to a command or a setting and perform cleaning (S2810).
- the image acquisition unit 120 may acquire a plurality of images by continuously photographing the periphery of the main body 110 while moving (S2820).
- the plurality of consecutive images acquired by the image acquisition unit 120 may be stored in the storage unit 150.
- when an obstacle is detected through the sensor unit 170 during the movement (S2830), the controller 140 may select, based on the moving direction and the moving speed of the main body 110, an image of a specific time point before the obstacle detection time of the sensor unit 170 from among the plurality of consecutive images (S2840).
- the movement direction and the movement speed may be calculated by the driving control module 141 or the like based on the output of the motion detection sensor of the sensing unit 170.
- alternatively, since the moving speed is usually constant, it may suffice to determine only the moving direction of the main body 110 in order to select the image of a specific time point.
- when the movement direction is straight travel or rotational travel of less than a predetermined reference value (reference angle), the controller 140 may select the image of a specific time point before the obstacle detection time of the sensor unit 170 from among the plurality of consecutive images.
- when an image is acquired using the obstacle detection of the sensor unit 170 as a signal, the mobile robot continues to travel, and thus the acquired image may not include the characteristics of the obstacle. In addition, when the sensing range of the sensor unit 170 is short, this may occur with a higher probability.
- the controller 140 may select a specific viewpoint image before the obstacle detection time of the sensor unit 170 by reflecting the moving direction and the speed when driving straight or driving close to the straight driving.
- the controller 140 may select an image of a past viewpoint based on an obstacle detection timing of the sensor unit 170.
- the faster the moving speed, the longer the distance the mobile robot 100 travels after the obstacle detection time of the sensor unit 170. In addition, since the image acquisition unit 120 captures and acquires a plurality of images at a constant rate, the faster the moving speed, the longer the distance the mobile robot travels between the time when one frame is captured and the time when the next frame is captured.
- accordingly, the faster the moving speed, the closer to the obstacle detection time of the sensor unit 170 the selected image should be, so that the obstacle occupies a larger area of the image.
- conversely, since the image acquisition unit 120 acquires more images over the same distance as the moving speed becomes slower, it is preferable to select an image of a more past time point relative to the obstacle detection time of the sensor unit 170.
- the obstacle recognition module 144 may select the specific view image and use it as input data of obstacle recognition.
- the obstacle recognition module 144 may recognize an attribute of an obstacle included in the selected specific viewpoint image based on data previously learned by machine learning (S2850).
- the obstacle recognition module 144 may include an artificial neural network learned to recognize an attribute such as an obstacle type by machine learning, and may recognize an attribute of an obstacle based on previously learned data (S2850).
- the obstacle recognition module 144 includes a convolutional neural network (CNN), which is one of the deep learning structures, and the previously learned CNN recognizes the attribute of the obstacle included in the input data and outputs the result.
- the driving control module 141 may control the driving of the driving unit 160 on the basis of the recognized attribute of the obstacle (S2860).
- the driving control module 141 may control the driving to bypass the obstacle.
- the driving control module 141 may control the robot to continue running straight when the recognized obstacle is of a height that the robot can pass over, such as a low barrier.
- the driving control module 141 may control the robot to bypass the obstacle when the obstacle is one that is likely to restrain the robot during movement even though it is of low height, such as the base of a fan, human hair, a power strip, or an electric wire.
- FIG. 29 is a flowchart illustrating a control method of a mobile robot according to an embodiment of the present invention.
- the mobile robots 100, 100a, and 100b may move according to a command or a setting and perform cleaning (S2910).
- the sensor unit 170 may include an obstacle detecting sensor 131 and may detect an obstacle within a predetermined range in front of the moving direction (S2920).
- the position information of the detected obstacle and the position information of the mobile robot 100, 100a, 100b may be stored in the storage unit 150 (S2930). .
- an area having a predetermined size based on the detected position of the obstacle may be registered and stored as an obstacle area in a map stored in the storage unit 150 (S2930).
- the position information of the mobile robot may be calculated from odometry information, and the odometry information may be configured based on the data detected by the motion detection sensor of the sensing unit 170 described above.
- the image acquisition unit 120 may acquire a plurality of images in the obstacle area during the movement (S2940).
- after the obstacle area is registered, the image acquisition unit 120 may acquire a plurality of images by continuously photographing in the obstacle area before the mobile robot 100, 100a, 100b leaves the obstacle area.
- the mobile robot (100, 100a, 100b) can obtain the image of the entire obstacle area while performing the avoidance driving for the detected obstacle.
- photographing the obstacle area while traveling in the obstacle area by a series of operations can bring a significant change in the travel path of the mobile robot (100, 100a, 100b).
- for example, a plurality of images may be obtained in stages, by taking a predetermined number of images while passing through the obstacle area and taking a predetermined number of images again while passing through the obstacle area in the opposite direction on the way back.
- based on the data previously learned by machine learning, the obstacle recognition module 144 may sequentially recognize the attribute of the obstacle with respect to the images acquired through the image acquisition unit 120 while moving in the obstacle area (S2950).
- the obstacle recognition module 144 may include an artificial neural network learned to recognize an attribute such as an obstacle type by machine learning, and may recognize an attribute of an obstacle based on previously learned data (S2950).
- the obstacle recognition module 144 is equipped with a convolutional neural network (CNN), which is one of the deep learning structures, and the previously learned CNN recognizes the attribute of the obstacle included in the input data and outputs the result.
- the obstacle recognition module 144 may perform obstacle recognition on one image acquired in the obstacle area, perform obstacle recognition on another image, and then select and store the more accurate of the two recognition results.
- the obstacle recognition module 144 may select a more accurate recognition result among the recognition results by comparing confidence values included in the two recognition results.
- the recognition result obtained last may be the current obstacle recognition result, and when obstacle recognition is performed on the next image, the current obstacle recognition result becomes the previous obstacle recognition result, and the recognition result for the next image becomes the new current obstacle recognition result.
- while sequentially recognizing the attribute of the obstacle with respect to the images acquired through the image acquisition unit 120 (S2950), the obstacle recognition module 144 may repeatedly compare the current obstacle recognition result with the previous obstacle recognition result.
- an image may be obtained by photographing the obstacle area.
- the obstacle recognition module 144 may sequentially perform the image recognition process on the obtained images and then select a recognition result having a higher confidence value compared with the previous obstacle recognition result.
- the obstacle recognition module 144 compares the current obstacle recognition result with the previous obstacle recognition result, and if they are the same, maintains the recognition result while raising the corresponding confidence value by reflecting a predetermined weight.
- the obstacle recognition module 144 may control to increase and store the confidence value by reflecting a predetermined weight.
- if the current obstacle recognition result is different from the previous obstacle recognition result, the obstacle recognition module 144 may register the recognition result having the higher confidence value of the two as the new current obstacle recognition result. In this way, the obstacle recognition module 144 may compare the two recognition results, keep the recognition result having the higher confidence value, and compare it with the subsequent recognition result.
- the obstacle recognition module 144 may determine the final attribute of the obstacle based on the plurality of sequentially recognized recognition results (S2960).
- the obstacle recognition module 144 may sequentially recognize the obstacle and determine the final property of the obstacle when a predetermined criterion is satisfied. That is, the obstacle recognition module 144 may output the current recognition result as the final recognition result when the predetermined criterion is satisfied.
- the current recognition result at the moment when the predetermined criterion is satisfied may be the last current recognition result.
- the final current recognition result may be determined as the final attribute of the obstacle when the obstacle recognition is completed for all areas of the obstacle area, that is, when the recognition coverage of the obstacle area is 100%. Can be.
- the obstacle recognition process may be set to end.
- in another embodiment, the obstacle recognition module 144 compares the current obstacle recognition result with the previous obstacle recognition result, and if they are the same, maintains the current obstacle recognition result and changes its confidence value to the average of the confidence value of the current obstacle recognition result and the confidence value of the previous obstacle recognition result. That is, the confidence value for the same recognition result is determined as the average of the existing confidence values of the two recognition results.
- if the current obstacle recognition result and the previous obstacle recognition result are not the same, the obstacle recognition module 144 may control to register, as the new current obstacle recognition result, whichever of the current and previous obstacle recognition results has the higher confidence value.
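- A minimal sketch of this sequential result merging follows; the confidence-boost weight is chosen arbitrarily, since the exact weight is not specified here.

```python
def merge_recognition(previous, current, same_weight=1.1):
    """Combine the previous and current recognition results for one obstacle.
    Each result is (class_name, confidence).  If the classes agree, keep the
    class and raise the averaged confidence by a small weight; otherwise keep
    whichever result is more confident.  The 1.1 weight is an assumed value."""
    prev_cls, prev_conf = previous
    cur_cls, cur_conf = current
    if prev_cls == cur_cls:
        boosted = min(1.0, same_weight * (prev_conf + cur_conf) / 2.0)
        return (cur_cls, boosted)
    return current if cur_conf > prev_conf else previous

result = ("fan", 0.8)
result = merge_recognition(result, ("fan", 0.7))            # same class -> boosted
result = merge_recognition(result, ("lamp support", 0.7))   # keep higher confidence
print(result)
```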
- the driving control module 141 may control the driving of the driving unit 160 based on the recognized final attribute of the obstacle (S2980).
- the driving control module 141 may control the driving to bypass the obstacle.
- the driving control module 141 may control the robot to continue running straight when the recognized obstacle is of a height that the robot can pass over, such as a low barrier.
- the driving control module 141 may control the robot to bypass the obstacle when the obstacle is one that is likely to restrain the robot during movement even though it is of low height, such as the base of a fan, human hair, a power strip, or an electric wire.
- the obstacle recognition module 144 may register and manage the obstacle as a dangerous obstacle or a non-hazardous obstacle according to the final attribute of the determined obstacle (S2970).
- for example, when the recognized obstacle is an obstacle of a height that cannot be passed over, or a low obstacle that may nevertheless restrain the robot during movement, such as the base of a fan, human hair, a power strip, or an electric wire, the obstacle recognition module 144 may register the recognized obstacle on the map as a dangerous obstacle.
- an obstacle such as a barrier that can be crossed even when driving straight may be registered on the map as a non-hazardous obstacle.
- the mobile robot 100 may travel to avoid the non-hazardous obstacle based on the dangerous obstacle and the non-hazardous obstacle information registered in the map.
- FIGS. 30 to 34 are views referred to for describing a method for controlling a mobile robot according to an embodiment of the present invention.
- the mobile robot 100 may detect an obstacle 3001 while driving.
- in this case, an obstacle area may be set and registered so as to include the recognized obstacle.
- the obstacle recognition module 144 may control to register, in the map of the entire area 3000 stored in the storage 150, an obstacle area 3010 having a predetermined size a x b around the obstacle 3001.
- for example, the obstacle area 3010 may be set as a square with a total size of 1 m x 1 m, extending 50 cm to the front and rear and 50 cm to the left and right of the obstacle 3001.
- the obstacle recognition module 144 may control to store the location information of the mobile robot 100 at the time of recognizing the obstacle 3001 together with the registration of the obstacle area 3010. Meanwhile, the location information of the mobile robot 100 may be information based on Odometry information.
- the obstacle recognition module 144 may control to store the location of the obstacle and the location information of the mobile robot in the following storage format.
- the image acquisition unit 120 may acquire a plurality of images in the obstacle area 3010 during the movement.
- the image acquisition unit 120 may acquire a plurality of images by continuously photographing in the obstacle area 3010 before the mobile robot 100 leaves the obstacle area 3010.
- FIG. 31 illustrates that a predetermined area of the obstacle area 3010 is recognized by two shootings. Circles 3110 and 3120 illustrated in FIG. 31 indicate recognition coverage, respectively, and their size and shape may vary depending on the configuration and performance of the image acquisition unit 120.
- the mobile robot 100 may photograph the obstacle 3001 at another position of the obstacle area 3010.
- according to the present invention, by moving through the obstacle area 3010 while acquiring images of the obstacle 3001 at various locations and performing the obstacle recognition process for each of the acquired images, it is possible to improve the accuracy of obstacle recognition.
- the obstacle recognition module 144 may manage the inner region of the obstacle area 3010 by dividing it horizontally and vertically at regular intervals, in a grid shape.
- FIG. 32 illustrates an obstacle area 3200 configured in the form of a 4 ⁇ 4 grid.
- the obstacle recognition module 144 may classify and store an area in which an obstacle is recognized by taking an image in the obstacle area 3200.
- by changing the value of a predetermined area 3210 once it has been recognized, the recognized areas and the unrecognized areas of the entire obstacle area 3200 may be distinguished, and it is possible to determine how much of the entire obstacle area 3200 has been recognized.
- the recognized areas 3210, 3220, 3230, and 3240 may increase as illustrated in FIG. 32B.
- when the predetermined criterion, for example 100% recognition coverage of the obstacle area, is satisfied, the obstacle recognition process may be terminated.
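- For illustration, coverage of the obstacle area could be tracked with a simple grid as sketched below; the 4 x 4 split follows the example above, while the rest is an assumption.

```python
import numpy as np

class ObstacleAreaCoverage:
    """Track how much of a registered obstacle area has been recognized.
    The area is split into a grid (4 x 4 here, as in the example above)."""
    def __init__(self, rows=4, cols=4):
        self.covered = np.zeros((rows, cols), dtype=bool)

    def mark_recognized(self, row, col):
        self.covered[row, col] = True

    def coverage(self):
        return self.covered.mean()          # fraction of cells already recognized

    def done(self):
        return bool(self.covered.all())     # stop when the whole area is covered

area = ObstacleAreaCoverage()
area.mark_recognized(0, 0)
area.mark_recognized(0, 1)
print(area.coverage(), area.done())         # 0.125 False
```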
- the obstacle recognition module 144 may recognize the obstacle as a fan by recognizing the image acquired by the mobile robot at the first position Rp1.
- the confidence value of the current recognition result recognized by the fan may be 0.8.
- the registration result corresponding to the current recognition result may also be a fan.
- the obstacle recognition module 144 may then obtain a recognition result in which the obstacle is determined to be a fan with a confidence value of 0.7.
- the obstacle recognition module 144 may maintain the fan as a registration result corresponding to the current recognition result because the first recognition result and the second recognition result are the same.
- one of the confidence values of the two recognition results may be used as it is.
- a mean value 0.75 obtained by averaging the confidence values of the first recognition result and the second recognition result may be used as the confidence value corresponding to the current recognition result.
- alternatively, as illustrated in the drawing, a confidence value of 0.835, raised by reflecting a predetermined weight, may be used as the confidence value corresponding to the current recognition result.
- based on the image obtained by the mobile robot at the third position Rp3, the obstacle recognition module 144 may obtain a recognition result in which the obstacle is determined to be a lamp support with a confidence value of 0.7.
- the obstacle recognition module 144 may compare the third recognition result with the previous recognition result.
- the obstacle recognition module 144 may select a fan having a higher confidence value as the registration result corresponding to the current recognition result.
- image data capable of increasing the accuracy of obstacle attribute recognition may be acquired.
- the mobile robot may determine the property of the obstacle and adjust the driving pattern according to the property of the obstacle, thereby performing a reliable obstacle recognition and avoiding operation.
- the mobile robot can perform machine learning efficiently and extract data that can be used for obstacle property recognition.
- the mobile robot according to the present invention is not limited to the configurations and methods of the embodiments described above; all or some of the embodiments may be selectively combined so that various modifications can be made.
- the control method of the mobile robot it is possible to implement as a processor-readable code on a processor-readable recording medium.
- the processor-readable recording medium includes all kinds of recording devices that store data that can be read by the processor. Examples of the processor-readable recording medium include ROM, RAM, CD-ROM, magnetic tape, floppy disk, optical data storage device, and the like, and may also be implemented in the form of a carrier wave such as transmission over the Internet. .
- the processor-readable recording medium can also be distributed over network coupled computer systems so that the processor-readable code is stored and executed in a distributed fashion.
Abstract
Description
Claims (20)
- A mobile robot comprising: a traveling unit configured to move a main body; an image acquisition unit configured to continuously capture images of the surroundings of the main body to acquire a plurality of images; a storage unit configured to store the plurality of consecutive images acquired by the image acquisition unit; a sensor unit including one or more sensors configured to detect an obstacle during movement; and a controller including an obstacle recognition module configured to, when the sensor unit detects an obstacle, select an image of a specific point in time before the obstacle detection time of the sensor unit from among the plurality of consecutive images, based on the moving direction and moving speed of the main body, and to recognize an attribute of the obstacle included in the selected specific point-in-time image.
- The mobile robot of claim 1, wherein the controller further comprises a travel control module configured to control driving of the traveling unit based on the recognized attribute of the obstacle.
- The mobile robot of claim 1, wherein the controller selects the image of the specific point in time before the obstacle detection time of the sensor unit from among the plurality of consecutive images when the moving direction corresponds to straight travel or to rotational travel of less than a predetermined reference value.
- The mobile robot of claim 1, wherein the controller selects an image of an earlier point in time, relative to the obstacle detection time of the sensor unit, as the moving speed becomes slower.
- The mobile robot of claim 1, wherein the obstacle recognition module recognizes the attribute of the obstacle included in the selected specific point-in-time image based on data previously learned through machine learning.
- The mobile robot of claim 1, further comprising a communication unit configured to transmit the selected specific point-in-time image to a predetermined server and to receive data related to machine learning from the predetermined server.
- The mobile robot of claim 6, wherein the obstacle recognition module is updated based on the data related to machine learning received from the predetermined server.
- The mobile robot of claim 1, wherein the controller further comprises an image processing module configured to extract a partial region of the selected specific point-in-time image corresponding to the direction of the obstacle detected by the sensor unit.
- The mobile robot of claim 8, wherein the image processing module extracts a lower-right region of the selected specific point-in-time image when the obstacle is detected in the right direction ahead of the main body, extracts a lower-left region of the selected specific point-in-time image when the obstacle is detected in the left direction ahead of the main body, and extracts a lower-center region of the selected specific point-in-time image when the obstacle is detected in the front direction of the main body.
- The mobile robot of claim 1, wherein the controller stores position information of the detected obstacle and position information of the mobile robot in the storage unit, registers an area having a predetermined size centered on the position of the detected obstacle as an obstacle area on a map, sequentially recognizes attributes of the obstacle from images acquired through the image acquisition unit in the obstacle area, and determines a final attribute of the obstacle based on the plurality of sequentially recognized recognition results.
- A method of controlling a mobile robot, the method comprising: continuously capturing images of the surroundings of a main body during movement through an image acquisition unit to acquire a plurality of images; storing the plurality of consecutive images acquired by the image acquisition unit; detecting an obstacle through a sensor unit; when the sensor unit detects the obstacle, selecting an image of a specific point in time before the obstacle detection time of the sensor unit from among the plurality of consecutive images, based on the moving direction and moving speed of the main body; recognizing an attribute of the obstacle included in the selected specific point-in-time image; and controlling driving of a traveling unit based on the recognized attribute of the obstacle.
- The method of claim 11, wherein in the selecting of the specific point-in-time image, the image of the specific point in time before the obstacle detection time of the sensor unit is selected from among the plurality of consecutive images when the moving direction corresponds to straight travel or to rotational travel of less than a predetermined reference value.
- The method of claim 11, wherein in the selecting of the specific point-in-time image, an image of an earlier point in time, relative to the obstacle detection time of the sensor unit, is selected as the moving speed becomes slower.
- The method of claim 11, wherein in the recognizing of the attribute of the obstacle, the attribute of the obstacle included in the selected specific point-in-time image is recognized based on data previously learned through machine learning.
- The method of claim 11, further comprising: transmitting the selected specific point-in-time image to a predetermined server; and receiving data related to machine learning from the predetermined server.
- The method of claim 15, further comprising updating an obstacle recognition module based on the data related to machine learning received from the predetermined server.
- The method of claim 11, further comprising extracting a partial region of the selected specific point-in-time image corresponding to the direction of the obstacle detected by the sensor unit.
- The method of claim 17, wherein the extracting of the partial region of the selected specific point-in-time image comprises: extracting a lower-right region of the image acquired by the image acquisition unit when the obstacle is detected in the right direction ahead of the main body; extracting a lower-left region of the image acquired by the image acquisition unit when the obstacle is detected in the left direction ahead of the main body; and extracting a lower-center region of the image acquired by the image acquisition unit when the obstacle is detected in the front direction of the main body.
- The method of claim 11, wherein in the controlling of the driving, an avoidance operation is performed when the detected obstacle is not an obstacle over which the robot may proceed.
- The method of claim 11, further comprising: when the sensor unit detects the obstacle, storing position information of the detected obstacle and position information of the mobile robot, and registering an area having a predetermined size centered on the position of the detected obstacle as an obstacle area on a map; sequentially recognizing attributes of the obstacle from images acquired through the image acquisition unit while moving in the obstacle area; and determining a final attribute of the obstacle based on the plurality of sequentially recognized recognition results.
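As a purely illustrative companion to the frame-selection and region-extraction features recited in claims 4, 9, 13, and 18 above, the sketch below shows one possible realization; the ring-buffer layout, the look-back rule, and the crop fractions are assumptions that do not appear in the claims.

```python
# Illustrative sketch only: pick an image captured before the obstacle was
# detected (further back in time the slower the robot moves), then crop the
# sub-region corresponding to the sensed obstacle direction. The look-back
# rule and crop fractions are assumptions, not the claimed implementation.

from collections import deque
import numpy as np

FRAME_BUFFER = deque(maxlen=30)  # consecutive frames, oldest first


def select_frame(speed_mps: float, lookback_scale: float = 0.3):
    """Slower speed -> select a frame from an earlier point in time."""
    if not FRAME_BUFFER:
        return None
    lookback = min(len(FRAME_BUFFER) - 1,
                   int(lookback_scale / max(speed_mps, 1e-3)))
    return FRAME_BUFFER[-1 - lookback]


def crop_for_direction(frame: np.ndarray, direction: str) -> np.ndarray:
    """Extract the lower image region matching the obstacle direction."""
    h, w = frame.shape[:2]
    if direction == "right":
        return frame[h // 2:, w // 2:]            # lower-right region
    if direction == "left":
        return frame[h // 2:, :w // 2]            # lower-left region
    return frame[h // 2:, w // 4:3 * w // 4]      # lower-center region (front)


# Usage with dummy 480x640 frames:
FRAME_BUFFER.extend(np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(10))
frame = select_frame(speed_mps=0.2)
patch = crop_for_direction(frame, "right")
print(patch.shape)  # (240, 320, 3)
```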
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2019510776A JP7055127B2 (ja) | 2016-08-25 | 2017-08-24 | 移動ロボット及びその制御方法 |
CN201780066283.1A CN109890576B (zh) | 2016-08-25 | 2017-08-24 | 移动机器人及其控制方法 |
AU2017316091A AU2017316091B2 (en) | 2016-08-25 | 2017-08-24 | Mobile robot and control method therefor |
US16/327,454 US11199852B2 (en) | 2016-08-25 | 2017-08-24 | Mobile robot and control method for controlling the same |
EP17843973.3A EP3505312B1 (en) | 2016-08-25 | 2017-08-24 | Mobile robot and control method therefor |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2016-0108385 | 2016-08-25 | ||
KR10-2016-0108386 | 2016-08-25 | ||
KR1020160108384A KR102548936B1 (ko) | 2016-08-25 | 2016-08-25 | 인공지능 이동 로봇 및 그 제어방법 |
KR1020160108385A KR20180023302A (ko) | 2016-08-25 | 2016-08-25 | 이동 로봇 및 그 제어방법 |
KR10-2016-0108384 | 2016-08-25 | ||
KR1020160108386A KR20180023303A (ko) | 2016-08-25 | 2016-08-25 | 이동 로봇 및 그 제어방법 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018038553A1 true WO2018038553A1 (ko) | 2018-03-01 |
Family
ID=61245133
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2017/009258 WO2018038552A1 (ko) | 2016-08-25 | 2017-08-24 | 이동 로봇 및 그 제어방법 |
PCT/KR2017/009257 WO2018038551A1 (ko) | 2016-08-25 | 2017-08-24 | 이동 로봇 및 그 제어방법 |
PCT/KR2017/009260 WO2018038553A1 (ko) | 2016-08-25 | 2017-08-24 | 이동 로봇 및 그 제어방법 |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2017/009258 WO2018038552A1 (ko) | 2016-08-25 | 2017-08-24 | 이동 로봇 및 그 제어방법 |
PCT/KR2017/009257 WO2018038551A1 (ko) | 2016-08-25 | 2017-08-24 | 이동 로봇 및 그 제어방법 |
Country Status (6)
Country | Link |
---|---|
US (3) | US11150666B2 (ko) |
EP (3) | EP3505310B1 (ko) |
JP (3) | JP6785950B2 (ko) |
CN (3) | CN109890574B (ko) |
AU (3) | AU2017316091B2 (ko) |
WO (3) | WO2018038552A1 (ko) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109452914A (zh) * | 2018-11-01 | 2019-03-12 | 北京石头世纪科技有限公司 | 智能清洁设备,清洁模式选择方法,计算机存储介质 |
CN109760060A (zh) * | 2019-03-02 | 2019-05-17 | 安徽理工大学 | 一种多自由度机器人智能避障方法及其系统 |
WO2019210198A1 (en) * | 2018-04-26 | 2019-10-31 | Maidbot, Inc. | Automated robot alert system |
WO2019210157A1 (en) * | 2018-04-26 | 2019-10-31 | Maidbot, Inc. | Robot contextualization of map regions |
RU2709523C1 (ru) * | 2019-02-19 | 2019-12-18 | Общество с ограниченной ответственностью "ПРОМОБОТ" | Система определения препятствий движению робота |
US20210209367A1 (en) * | 2018-05-22 | 2021-07-08 | Starship Technologies Oü | Method and system for analyzing robot surroundings |
Families Citing this family (45)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018038552A1 (ko) * | 2016-08-25 | 2018-03-01 | 엘지전자 주식회사 | 이동 로봇 및 그 제어방법 |
USD906390S1 (en) * | 2017-12-14 | 2020-12-29 | The Hi-Tech Robotic Systemz Ltd | Mobile robot |
USD896858S1 (en) * | 2017-12-14 | 2020-09-22 | The Hi-Tech Robotic Systemz Ltd | Mobile robot |
USD907084S1 (en) * | 2017-12-14 | 2021-01-05 | The Hi-Tech Robotic Systemz Ltd | Mobile robot |
USD879851S1 (en) * | 2017-12-29 | 2020-03-31 | Beijing Geekplus Technology Co., Ltd. | Robot |
USD879852S1 (en) * | 2018-03-15 | 2020-03-31 | Beijing Geekplus Technology Co., Ltd. | Mobile robot |
JP7194914B2 (ja) * | 2018-03-29 | 2022-12-23 | パナソニックIpマネジメント株式会社 | 自律移動掃除機、自律移動掃除機による掃除方法、及び自律移動掃除機用プログラム |
KR102519064B1 (ko) | 2018-04-25 | 2023-04-06 | 삼성전자주식회사 | 사용자에게 서비스를 제공하는 이동형 로봇 장치 및 방법 |
KR102100474B1 (ko) * | 2018-04-30 | 2020-04-13 | 엘지전자 주식회사 | 인공지능 청소기 및 그 제어방법 |
WO2019212239A1 (en) * | 2018-05-04 | 2019-11-07 | Lg Electronics Inc. | A plurality of robot cleaner and a controlling method for the same |
CN110470296A (zh) * | 2018-05-11 | 2019-11-19 | 珠海格力电器股份有限公司 | 一种定位方法、定位机器人及计算机存储介质 |
US10885395B2 (en) * | 2018-06-17 | 2021-01-05 | Pensa Systems | Method for scaling fine-grained object recognition of consumer packaged goods |
CN108968811A (zh) * | 2018-06-20 | 2018-12-11 | 四川斐讯信息技术有限公司 | 一种扫地机器人的物体识别方法及系统 |
USD911406S1 (en) * | 2018-08-17 | 2021-02-23 | Grey Orange Pte. Ltd | Robot for moving articles within a facility |
KR102577785B1 (ko) * | 2018-09-20 | 2023-09-13 | 삼성전자주식회사 | 청소 로봇 및 그의 태스크 수행 방법 |
US11338438B2 (en) * | 2019-01-25 | 2022-05-24 | Bear Robotics, Inc. | Method, system and non-transitory computer-readable recording medium for determining a movement path of a robot |
WO2020180051A1 (en) * | 2019-03-07 | 2020-09-10 | Samsung Electronics Co., Ltd. | Electronic apparatus and control method thereof |
TWI683197B (zh) * | 2019-03-19 | 2020-01-21 | 東元電機股份有限公司 | 移動平台圖資校正系統 |
DE102019109596A1 (de) * | 2019-04-11 | 2020-10-15 | Vorwerk & Co. Interholding Gmbh | System aus einem manuell geführten Bodenbearbeitungsgerät, einem ausschließlich automatisch betriebenen Bodenbearbeitungsgerät und einer Recheneinrichtung |
CN110315553B (zh) * | 2019-06-23 | 2023-10-27 | 大国重器自动化设备(山东)股份有限公司 | 一种餐厅用机器人防碰撞系统及方法 |
CN110315537A (zh) * | 2019-06-27 | 2019-10-11 | 深圳乐动机器人有限公司 | 一种控制机器人运动的方法、装置及机器人 |
CN112149458A (zh) * | 2019-06-27 | 2020-12-29 | 商汤集团有限公司 | 障碍物检测方法、智能驾驶控制方法、装置、介质及设备 |
US11650597B2 (en) * | 2019-07-09 | 2023-05-16 | Samsung Electronics Co., Ltd. | Electronic apparatus for identifying object through warped image and control method thereof |
WO2021006622A1 (en) * | 2019-07-09 | 2021-01-14 | Samsung Electronics Co., Ltd. | Electronic apparatus and controlling method thereof |
CN110353583A (zh) * | 2019-08-21 | 2019-10-22 | 追创科技(苏州)有限公司 | 扫地机器人及扫地机器人的自动控制方法 |
US11426885B1 (en) * | 2019-08-27 | 2022-08-30 | X Development Llc | Robot docking station identification surface |
KR20210028426A (ko) | 2019-09-04 | 2021-03-12 | 엘지전자 주식회사 | 로봇 청소기 및 그 제어방법 |
CN110543436B (zh) * | 2019-09-06 | 2021-09-24 | 北京云迹科技有限公司 | 一种机器人的数据获取方法及装置 |
KR20210039232A (ko) * | 2019-10-01 | 2021-04-09 | 엘지전자 주식회사 | 로봇 청소기 및 청소 경로를 결정하기 위한 방법 |
JP7226575B2 (ja) * | 2019-10-01 | 2023-02-21 | 日本電信電話株式会社 | 通信端末及び通信品質予測方法 |
KR20210057582A (ko) * | 2019-11-12 | 2021-05-21 | 삼성전자주식회사 | 잘못 흡입된 객체를 식별하는 로봇 청소기 및 그 제어 방법 |
CN110861089B (zh) * | 2019-11-29 | 2020-11-06 | 北京理工大学 | 一种多机器人系统任务均衡分配协同工作控制方法 |
CN111538329B (zh) * | 2020-04-09 | 2023-02-28 | 北京石头创新科技有限公司 | 一种图像查看方法、终端及清洁机 |
CN111715559A (zh) * | 2020-06-22 | 2020-09-29 | 柴诚芃 | 一种基于机器视觉的垃圾分拣系统 |
CN112077840B (zh) * | 2020-08-08 | 2022-02-15 | 浙江科聪控制技术有限公司 | 一种防爆巡检机器人的避障方法及应用于该方法的机器人 |
CN112380942B (zh) * | 2020-11-06 | 2024-08-06 | 北京石头创新科技有限公司 | 一种识别障碍物的方法、装置、介质和电子设备 |
KR102361338B1 (ko) * | 2020-11-27 | 2022-02-15 | (주)엘이디소프트 | 자율주행이 가능한 실내 uv 살균장치 |
USD967883S1 (en) * | 2021-01-06 | 2022-10-25 | Grey Orange International Inc. | Robot for handling goods in a facility |
KR20220111526A (ko) * | 2021-02-02 | 2022-08-09 | 자이메드 주식회사 | 실시간 생체 이미지 인식 방법 및 장치 |
CN112971616B (zh) * | 2021-02-07 | 2022-12-30 | 美智纵横科技有限责任公司 | 一种充电座规避方法、装置、扫地机器人及存储介质 |
WO2023003158A1 (ko) * | 2021-07-20 | 2023-01-26 | 삼성전자주식회사 | 로봇 및 그 제어 방법 |
CN113721603B (zh) * | 2021-07-29 | 2023-08-08 | 云鲸智能(深圳)有限公司 | 基站探索方法、装置、机器人及可读存储介质 |
CN113827152B (zh) * | 2021-08-30 | 2023-02-17 | 北京盈迪曼德科技有限公司 | 区域状态确定方法、装置及机器人 |
CN114670244B (zh) * | 2022-03-29 | 2023-10-20 | 中国铁建重工集团股份有限公司 | 一种结构制造方法及装置 |
CN116038716B (zh) * | 2023-03-14 | 2023-07-18 | 煤炭科学研究总院有限公司 | 机器人的控制方法和机器人的控制模型的训练方法 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100669892B1 (ko) | 2005-05-11 | 2007-01-19 | 엘지전자 주식회사 | 장애물 회피 기능을 갖는 이동로봇과 그 방법 |
KR20090112984A (ko) * | 2008-04-25 | 2009-10-29 | 포항공과대학교 산학협력단 | 장애물 감지 센서를 구비한 이동로봇 |
KR20120109247A (ko) * | 2011-03-28 | 2012-10-08 | 고려대학교 산학협력단 | 이동 로봇의 장애물 회피 시스템 |
KR20130042389A (ko) * | 2011-10-18 | 2013-04-26 | 엘지전자 주식회사 | 이동 로봇 및 이의 제어 방법 |
US20150336274A1 (en) * | 2014-05-20 | 2015-11-26 | International Business Machines Corporation | Information Technology Asset Type Identification Using a Mobile Vision-Enabled Robot |
KR20150142475A (ko) * | 2014-06-12 | 2015-12-22 | 연세대학교 산학협력단 | 장애물 식별 장치 및 방법 |
Family Cites Families (44)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5314037A (en) * | 1993-01-22 | 1994-05-24 | Shaw David C H | Automobile collision avoidance system |
JPH1132253A (ja) * | 1997-07-08 | 1999-02-02 | Hitachi Ltd | 画像処理装置 |
JP3529049B2 (ja) * | 2002-03-06 | 2004-05-24 | ソニー株式会社 | 学習装置及び学習方法並びにロボット装置 |
CN100509308C (zh) * | 2002-03-15 | 2009-07-08 | 索尼公司 | 用于机器人的行为控制系统和行为控制方法及机器人装置 |
EP1541295A1 (en) * | 2002-08-26 | 2005-06-15 | Sony Corporation | Environment identification device, environment identification method, and robot device |
US7689321B2 (en) * | 2004-02-13 | 2010-03-30 | Evolution Robotics, Inc. | Robust sensor fusion for mapping and localization in a simultaneous localization and mapping (SLAM) system |
KR100671897B1 (ko) * | 2005-04-04 | 2007-01-24 | 주식회사 대우일렉트로닉스 | 스위치형 센서가 구비된 로봇 청소기 |
JP2007011857A (ja) * | 2005-07-01 | 2007-01-18 | Toyota Motor Corp | 自律移動型ロボット及び自律移動型ロボットの制御方法 |
KR100811886B1 (ko) * | 2006-09-28 | 2008-03-10 | 한국전자통신연구원 | 장애물 회피 진행이 가능한 자율이동로봇 및 그 방법 |
JP2008172441A (ja) * | 2007-01-10 | 2008-07-24 | Omron Corp | 検出装置および方法、並びに、プログラム |
KR100877072B1 (ko) * | 2007-06-28 | 2009-01-07 | 삼성전자주식회사 | 이동 로봇을 위한 맵 생성 및 청소를 동시에 수행하는 방법및 장치 |
KR100902929B1 (ko) * | 2007-07-18 | 2009-06-15 | 엘지전자 주식회사 | 이동 로봇 및 그 제어방법 |
CN101271525B (zh) * | 2008-04-10 | 2011-05-04 | 复旦大学 | 一种快速的图像序列特征显著图获取方法 |
KR100997656B1 (ko) | 2008-06-18 | 2010-12-01 | 한양대학교 산학협력단 | 로봇의 장애물 검출방법 및 그 장치 |
KR101495333B1 (ko) | 2008-07-02 | 2015-02-25 | 삼성전자 주식회사 | 장애물 검출 장치 및 방법 |
JP5106356B2 (ja) | 2008-11-17 | 2012-12-26 | セコム株式会社 | 画像監視装置 |
KR101524020B1 (ko) * | 2009-03-06 | 2015-05-29 | 엘지전자 주식회사 | 로봇 청소기의 점진적 지도 작성 및 위치 보정 방법 |
TWI388956B (zh) * | 2009-05-20 | 2013-03-11 | Univ Nat Taiwan Science Tech | 行動機器人與其目標物處理路徑的規劃方法 |
US9014848B2 (en) * | 2010-05-20 | 2015-04-21 | Irobot Corporation | Mobile robot system |
US8447863B1 (en) * | 2011-05-06 | 2013-05-21 | Google Inc. | Systems and methods for object recognition |
JP5472479B2 (ja) * | 2011-06-15 | 2014-04-16 | 東レ株式会社 | 複合繊維 |
US9582000B2 (en) * | 2011-09-07 | 2017-02-28 | Lg Electronics Inc. | Robot cleaner, and system and method for remotely controlling the same |
US8311973B1 (en) * | 2011-09-24 | 2012-11-13 | Zadeh Lotfi A | Methods and systems for applications for Z-numbers |
US8396254B1 (en) * | 2012-02-09 | 2013-03-12 | Google Inc. | Methods and systems for estimating a location of a robot |
KR101949277B1 (ko) | 2012-06-18 | 2019-04-25 | 엘지전자 주식회사 | 이동 로봇 |
KR20140031742A (ko) * | 2012-09-05 | 2014-03-13 | 삼성전기주식회사 | 이미지 특징 추출 장치 및 이미지 특징 추출 방법, 그를 이용한 영상 처리 시스템 |
KR101490170B1 (ko) * | 2013-03-05 | 2015-02-05 | 엘지전자 주식회사 | 로봇 청소기 |
KR101450569B1 (ko) * | 2013-03-05 | 2014-10-14 | 엘지전자 주식회사 | 로봇 청소기 |
KR101395888B1 (ko) * | 2013-03-21 | 2014-05-27 | 엘지전자 주식회사 | 로봇 청소기 및 그 동작방법 |
WO2015006224A1 (en) * | 2013-07-08 | 2015-01-15 | Vangogh Imaging, Inc. | Real-time 3d computer vision processing engine for object recognition, reconstruction, and analysis |
KR102152641B1 (ko) * | 2013-10-31 | 2020-09-08 | 엘지전자 주식회사 | 이동 로봇 |
US9504367B2 (en) * | 2013-11-20 | 2016-11-29 | Samsung Electronics Co., Ltd. | Cleaning robot and method for controlling the same |
KR102016551B1 (ko) * | 2014-01-24 | 2019-09-02 | 한화디펜스 주식회사 | 위치 추정 장치 및 방법 |
KR102158695B1 (ko) * | 2014-02-12 | 2020-10-23 | 엘지전자 주식회사 | 로봇 청소기 및 이의 제어방법 |
US9713982B2 (en) * | 2014-05-22 | 2017-07-25 | Brain Corporation | Apparatus and methods for robotic operation using video imagery |
JP2016028311A (ja) * | 2014-07-10 | 2016-02-25 | 株式会社リコー | ロボット、プログラム、及び記憶媒体 |
US10289910B1 (en) * | 2014-07-10 | 2019-05-14 | Hrl Laboratories, Llc | System and method for performing real-time video object recognition utilizing convolutional neural networks |
KR101610502B1 (ko) * | 2014-09-02 | 2016-04-07 | 현대자동차주식회사 | 자율주행차량의 주행환경 인식장치 및 방법 |
CN104268882A (zh) * | 2014-09-29 | 2015-01-07 | 深圳市热活力科技有限公司 | 基于双线阵摄像头的高速运动物体检测与测速方法及系统 |
US9704043B2 (en) | 2014-12-16 | 2017-07-11 | Irobot Corporation | Systems and methods for capturing images and annotating the captured images with information |
CN104552341B (zh) * | 2014-12-29 | 2016-05-04 | 国家电网公司 | 移动工业机器人单点多视角挂表位姿误差检测方法 |
JP6393199B2 (ja) * | 2015-01-22 | 2018-09-19 | 株式会社日本総合研究所 | 歩行障害箇所判定システム及びその自走型走行装置 |
US9987752B2 (en) * | 2016-06-10 | 2018-06-05 | Brain Corporation | Systems and methods for automatic detection of spills |
WO2018038552A1 (ko) * | 2016-08-25 | 2018-03-01 | 엘지전자 주식회사 | 이동 로봇 및 그 제어방법 |
- 2017
- 2017-08-24 WO PCT/KR2017/009258 patent/WO2018038552A1/ko unknown
- 2017-08-24 US US16/327,449 patent/US11150666B2/en active Active
- 2017-08-24 AU AU2017316091A patent/AU2017316091B2/en active Active
- 2017-08-24 EP EP17843971.7A patent/EP3505310B1/en active Active
- 2017-08-24 JP JP2019510871A patent/JP6785950B2/ja active Active
- 2017-08-24 CN CN201780066246.0A patent/CN109890574B/zh active Active
- 2017-08-24 EP EP17843973.3A patent/EP3505312B1/en active Active
- 2017-08-24 CN CN201780066252.6A patent/CN109890575B/zh active Active
- 2017-08-24 AU AU2017316089A patent/AU2017316089B2/en active Active
- 2017-08-24 CN CN201780066283.1A patent/CN109890576B/zh active Active
- 2017-08-24 JP JP2019510881A patent/JP6861797B2/ja active Active
- 2017-08-24 US US16/327,452 patent/US20190179333A1/en not_active Abandoned
- 2017-08-24 WO PCT/KR2017/009257 patent/WO2018038551A1/ko unknown
- 2017-08-24 EP EP17843972.5A patent/EP3505311B1/en active Active
- 2017-08-24 WO PCT/KR2017/009260 patent/WO2018038553A1/ko unknown
- 2017-08-24 US US16/327,454 patent/US11199852B2/en active Active
- 2017-08-24 AU AU2017316090A patent/AU2017316090B2/en active Active
- 2017-08-24 JP JP2019510776A patent/JP7055127B2/ja active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100669892B1 (ko) | 2005-05-11 | 2007-01-19 | 엘지전자 주식회사 | 장애물 회피 기능을 갖는 이동로봇과 그 방법 |
KR20090112984A (ko) * | 2008-04-25 | 2009-10-29 | 포항공과대학교 산학협력단 | 장애물 감지 센서를 구비한 이동로봇 |
KR20120109247A (ko) * | 2011-03-28 | 2012-10-08 | 고려대학교 산학협력단 | 이동 로봇의 장애물 회피 시스템 |
KR20130042389A (ko) * | 2011-10-18 | 2013-04-26 | 엘지전자 주식회사 | 이동 로봇 및 이의 제어 방법 |
US20150336274A1 (en) * | 2014-05-20 | 2015-11-26 | International Business Machines Corporation | Information Technology Asset Type Identification Using a Mobile Vision-Enabled Robot |
KR20150142475A (ko) * | 2014-06-12 | 2015-12-22 | 연세대학교 산학협력단 | 장애물 식별 장치 및 방법 |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019210198A1 (en) * | 2018-04-26 | 2019-10-31 | Maidbot, Inc. | Automated robot alert system |
WO2019210157A1 (en) * | 2018-04-26 | 2019-10-31 | Maidbot, Inc. | Robot contextualization of map regions |
US11113945B2 (en) | 2018-04-26 | 2021-09-07 | Maidbot, Inc. | Automated robot alert system |
US11798390B2 (en) | 2018-04-26 | 2023-10-24 | Tailos, Inc. | Automated robot alert system |
US20210209367A1 (en) * | 2018-05-22 | 2021-07-08 | Starship Technologies Oü | Method and system for analyzing robot surroundings |
US11741709B2 (en) * | 2018-05-22 | 2023-08-29 | Starship Technologies Oü | Method and system for analyzing surroundings of an autonomous or semi-autonomous vehicle |
CN109452914A (zh) * | 2018-11-01 | 2019-03-12 | 北京石头世纪科技有限公司 | 智能清洁设备,清洁模式选择方法,计算机存储介质 |
RU2709523C1 (ru) * | 2019-02-19 | 2019-12-18 | Общество с ограниченной ответственностью "ПРОМОБОТ" | Система определения препятствий движению робота |
CN109760060A (zh) * | 2019-03-02 | 2019-05-17 | 安徽理工大学 | 一种多自由度机器人智能避障方法及其系统 |
CN109760060B (zh) * | 2019-03-02 | 2021-06-08 | 安徽理工大学 | 一种多自由度机器人智能避障方法及其系统 |
Also Published As
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018038553A1 (ko) | 이동 로봇 및 그 제어방법 | |
WO2018097574A1 (ko) | 이동 로봇 및 그 제어방법 | |
WO2018139796A1 (ko) | 이동 로봇 및 그의 제어 방법 | |
AU2020247141B2 (en) | Mobile robot and method of controlling the same | |
WO2021010757A1 (en) | Mobile robot and control method thereof | |
WO2018155999A2 (en) | Moving robot and control method thereof | |
WO2017188706A1 (ko) | 이동 로봇 및 이동 로봇의 제어방법 | |
WO2020218652A1 (ko) | 공기 청정기 | |
WO2021006677A2 (en) | Mobile robot using artificial intelligence and controlling method thereof | |
WO2019151845A2 (ko) | 에어컨 | |
WO2019083291A1 (ko) | 장애물을 학습하는 인공지능 이동 로봇 및 그 제어방법 | |
AU2020244635B2 (en) | Mobile robot control method | |
WO2017188800A1 (ko) | 이동 로봇 및 그 제어방법 | |
WO2021006542A1 (en) | Mobile robot using artificial intelligence and controlling method thereof | |
WO2016028021A1 (ko) | 청소 로봇 및 그 제어 방법 | |
WO2019212240A1 (en) | A plurality of robot cleaner and a controlling method for the same | |
WO2022075614A1 (ko) | 이동 로봇 시스템 | |
WO2020241951A1 (ko) | 인공지능 학습방법 및 이를 이용하는 로봇의 동작방법 | |
WO2021020621A1 (ko) | 인공지능 무빙 에이전트 | |
WO2018117616A1 (ko) | 이동 로봇 | |
WO2022075615A1 (ko) | 이동 로봇 시스템 | |
WO2022075610A1 (ko) | 이동 로봇 시스템 | |
WO2021006553A1 (en) | Moving robot and control method thereof | |
WO2019004773A1 (ko) | 이동 단말기 및 이를 포함하는 로봇 시스템 | |
WO2022075616A1 (ko) | 이동 로봇 시스템 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17843973 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2019510776 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2017843973 Country of ref document: EP Effective date: 20190325 |
|
ENP | Entry into the national phase |
Ref document number: 2017316091 Country of ref document: AU Date of ref document: 20170824 Kind code of ref document: A |