WO2021000630A1 - Method, apparatus and storage medium for updating a working map of a mobile robot - Google Patents

Method, apparatus and storage medium for updating a working map of a mobile robot

Info

Publication number
WO2021000630A1
Authority
WO
WIPO (PCT)
Prior art keywords
mobile robot
map
environment
detection
current
Prior art date
Application number
PCT/CN2020/085231
Other languages
English (en)
French (fr)
Inventor
薛景涛
贺亚农
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to EP20835590.9A (EP3974778B1)
Publication of WO2021000630A1
Priority to US17/565,640 (US11896175B2)


Classifications

    • G01C 21/32: Structuring or formatting of map data (map- or contour-matching navigation)
    • G05D 1/0274: Control of position or course in two dimensions for land vehicles using mapping information stored in a memory device
    • A47L 11/4011: Regulation of the cleaning machine by electric means; control systems and remote control systems therefor
    • G01C 21/383: Creation or updating of map data; indoor data
    • G01C 21/3837: Creation or updating of map data; data obtained from a single source
    • G01C 21/3848: Creation or updating of map data; data obtained from both position sensors and additional sensors
    • G01C 21/387: Organisation of map data, e.g. version management or database structures
    • G05D 1/0219: Trajectory control for land vehicles ensuring the processing of the whole working surface
    • G05D 1/246: Arrangements for determining position or orientation using environment maps, e.g. simultaneous localisation and mapping [SLAM]
    • G05D 1/648: Performing a task within a working area or space, e.g. cleaning
    • G06V 10/809: Fusion of classification results, e.g. where the classifiers operate on the same input data
    • G06V 20/10: Terrestrial scenes
    • G06V 20/653: Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces
    • A47L 2201/04: Automatic control of the travelling movement; automatic obstacle detection

Definitions

  • This application relates to the field of intelligent control technology, and more specifically, to a method, device, and storage medium for updating a working map of a mobile robot.
  • SLAM (simultaneous localization and mapping): a technique in which the mobile robot estimates its own position while simultaneously building a map of the surrounding environment.
  • Since mobile robots (especially home service robots) generally use consumer-grade sensors and drive systems, and have no external auxiliary positioning equipment, their positioning accuracy and reliability are not high. Therefore, to reduce the impact of a single abnormal positioning on path planning, a mobile robot generally does not save the environment layout map; it rebuilds the map every time it works and then plans its path according to the newly created environment layout map (for example, each time a task starts, it performs a rough detection of the surrounding environment, constructs an environment layout map from the detected object distribution information, and then plans a path according to that map).
  • the environment layout map obtained by the above mapping method is generally not comprehensive enough to reflect the detailed layout of objects in the workplace.
  • This application provides a method, a device and a storage medium for updating the working map of a mobile robot, so as to obtain a map that can more comprehensively reflect the layout of the working environment of the mobile robot.
  • In a first aspect, a method for updating the working map of a mobile robot is provided, which includes: obtaining M environment detection maps; fusing the M environment detection maps to obtain an environment fusion map; and weighting the pixel values of the environment fusion map and the pixel values of the environment layout map to obtain an updated environment layout map.
  • the above M environmental detection maps are determined according to the object distribution information detected by the mobile robot during the movement, and M is an integer greater than 1.
  • the environment detection map may be determined based on the object distribution information detected by the mobile robot, and the environment layout map is generally obtained by fusing multiple environment detection maps. Both the environment detection map and the environment layout map can reflect the distribution of objects in the working environment. Compared with the environment detection map, the environment layout map can generally reflect the overall distribution or layout of objects in the working environment more comprehensively.
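  • As a concrete illustration of the weighting step described above (a minimal sketch, not the patented implementation), the following Python code combines an environment fusion map and a saved environment layout map pixel by pixel; the use of NumPy arrays, the function name `update_layout_map`, and the default weights 0.7/0.3 are assumptions for this example.

```python
import numpy as np

def update_layout_map(fusion_map: np.ndarray,
                      layout_map: np.ndarray,
                      w_fusion: float = 0.7,
                      w_layout: float = 0.3) -> np.ndarray:
    """Weight the pixel values of the environment fusion map (first weight)
    against the pixel values of the saved environment layout map (second weight)
    to obtain the updated environment layout map."""
    assert fusion_map.shape == layout_map.shape, "maps must share one grid"
    return w_fusion * fusion_map + w_layout * layout_map
```

  • In this sketch a larger `w_fusion` makes the stored map track newly detected layouts more quickly, which mirrors the weight discussion later in this section.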
  • when fusing, the above M environment detection maps can be fused directly, or they can first be preprocessed (for example, filtered or denoised) and the preprocessed maps fused, or the M environment detection maps can first be screened and the maps obtained after screening fused.
  • the above-mentioned environment layout map may be obtained by the mobile robot fusion according to multiple previously acquired environment detection maps.
  • when a mobile robot performs a task in a new workplace, it can first perform detection in the workplace, determine multiple environment detection maps based on the distribution information of the detected objects, and then determine the environment layout map from those environment detection maps.
  • the environment layout map can be continuously updated and optimized according to the method of this application.
  • the foregoing fusing of the M environment detection maps to obtain an environment fusion map includes: filtering the M environment detection maps to obtain M filtered environment detection maps; and fusing the M filtered environment detection maps to obtain the environment fusion map.
  • by filtering the environment detection maps, the interference of small objects in the environment can be removed and the main layout of the environment preserved.
  • morphological filtering may be specifically used to extract line features of the environment detection map.
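  • As one possible realization of such filtering (an assumption, not the patent's specific filter), morphological opening with a small structuring element removes isolated small obstacles while preserving the dominant line features such as walls; the OpenCV calls below operate on an 8-bit occupancy image.

```python
import cv2
import numpy as np

def filter_detection_map(detection_map: np.ndarray, kernel_size: int = 3) -> np.ndarray:
    """Morphological opening on an 8-bit occupancy image (occupied cells = 255):
    erodes away small isolated blobs, then dilates the remaining structure back,
    so the main layout (walls, large furniture) is preserved."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    return cv2.morphologyEx(detection_map, cv2.MORPH_OPEN, kernel)
```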
  • the foregoing environment layout map is an environment layout map of the first workplace currently saved by the mobile robot.
  • after obtaining the updated environment layout map, the mobile robot can replace the original environment layout map with the updated one and save it; the saved map is used until the next map update, when the environment layout map is again updated according to newly detected environment detection maps.
  • the environment layout map saved by the mobile robot is updated through multiple detection maps obtained while the mobile robot is working, so that the updated environment layout map reflects the environment layout in more detail, making it easier for the mobile robot to perform work tasks according to the updated map.
  • the mobile robot may be controlled to perform work tasks in the first workplace according to the updated environment layout map.
  • the mobile robot can better perform work tasks in the first workplace according to the updated environment layout map.
  • the aforementioned M is a preset value.
  • the above M may be a value set by the manufacturer in advance, or a value set by the user.
  • the above-mentioned value of M can also be flexibly set by the configuration information of the number of environment detection maps, and the number of environment detection maps to be acquired can be directly determined through the configuration information of the number of environment detection maps.
  • the above method further includes: obtaining configuration information of the number of environmental detection maps, where the configuration information of the number of environmental detection maps includes the value of M; and determining the value of M according to the configuration information of the number of environmental detection maps Numerical value.
  • the setting of M can be flexibly realized through the configuration information of the number of environmental detection maps.
  • the above-mentioned environmental detection map quantity configuration information may be information input by the user.
  • the user can input the above-mentioned environment detection map quantity configuration information through the control interface of the control device; the control device may be a module integrated in the mobile robot, or a separate device independent of the mobile robot.
  • the above-mentioned value of M can be adjusted through the modification information of the environmental detection map quantity to increase or decrease the value of M.
  • the adjustment of the value of M can be realized by sending information about modifying the number of environment detection maps to the control device.
  • the weight corresponding to the pixel value of the environment fusion map is a first weight, and the weight corresponding to the pixel value of the environment layout map is a second weight.
  • the size of the first weight and the second weight is determined according to the map update requirements of the mobile robot.
  • the map update requirement of the mobile robot may be the requirement of the mobile robot (when performing a task) on the speed (or frequency) of updating the environment layout map, or the requirement of the mobile robot (when performing a task) on the update amplitude of the environment layout map.
  • when the required map update frequency or update amplitude is high, the first weight can be set to a larger value and the second weight to a smaller value; when the required update frequency or update amplitude is low, the first weight can be set to a smaller value and the second weight to a larger value.
  • the environment layout map can be flexibly updated according to the map update requirements of the mobile robot.
  • determining the first weight and the second weight according to the map update requirements of the mobile robot includes: the first weight is positively correlated with the update frequency of the environment layout map required by the mobile robot, and the second weight is inversely correlated with that update frequency.
  • when the map update frequency required by the mobile robot (when performing tasks) is high, the first weight can be set to a larger value and the second weight to a smaller value (for example, the first weight is set to 0.7 and the second weight to 0.3).
  • when the map update frequency required by the mobile robot (when performing a task) is low, the first weight can be set to a smaller value and the second weight to a larger value (for example, the first weight is set to 0.3 and the second weight to 0.7).
  • determining the first weight and the second weight according to the map update requirements of the mobile robot includes: the first weight is positively correlated with the update amplitude of the environment layout map required by the mobile robot, and the second weight is inversely correlated with that update amplitude.
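  • The correlations above can be captured by a small helper that maps a normalized update requirement to the two weights; the linear form, the 0.3 to 0.7 range, and the function name are illustrative assumptions consistent with the 0.7/0.3 examples given above.

```python
def select_weights(update_requirement: float) -> tuple:
    """Return (first weight, second weight) for a normalized update requirement
    in [0, 1]: 0 = low required update frequency/amplitude, 1 = high.
    The first weight grows with the requirement, the second weight shrinks."""
    level = max(0.0, min(1.0, update_requirement))
    w_fusion = 0.3 + 0.4 * level          # 0.3 .. 0.7
    return w_fusion, 1.0 - w_fusion

# select_weights(1.0) ≈ (0.7, 0.3); select_weights(0.0) ≈ (0.3, 0.7)
```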
  • fusing the M environment detection maps to obtain the environment fusion map obtained by the current fusion includes: determining N environment detection maps from the M environment detection maps; and fusing the N environment detection maps to obtain the environment fusion map obtained by the current fusion.
  • the degree of agreement between any two environmental detection maps in the above N environmental detection maps is greater than or equal to the first threshold, and N is a positive integer less than or equal to M.
  • by selecting environment detection maps with better consistency from the M environment detection maps for fusion, a more accurate environment fusion map can be obtained.
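  • A minimal sketch of such a selection, assuming grid maps normalized to [0, 1] and using a simple cell-agreement ratio as the consistency measure (the patent's formulas (1) and (2) are not reproduced here):

```python
import numpy as np

def consistency(map_a: np.ndarray, map_b: np.ndarray) -> float:
    """Illustrative consistency measure: fraction of grid cells whose
    occupied/free state agrees between the two detection maps."""
    return float(np.mean((map_a > 0.5) == (map_b > 0.5)))

def select_consistent_maps(maps, first_threshold: float = 0.7):
    """Greedily keep the N maps whose pairwise consistency with every
    already-kept map is at least the first threshold."""
    kept = []
    for m in maps:
        if all(consistency(m, k) >= first_threshold for k in kept):
            kept.append(m)
    return kept
```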
  • the above method further includes: obtaining current object distribution information; determining a current environment detection map according to the current object distribution information; when the degree of consistency between the current environment detection map and the environment layout map is less than a second threshold, determining that an abnormality has occurred when the mobile robot detects the surrounding environment; and controlling the mobile robot to perform abnormal recovery.
  • the above-mentioned current object distribution information is the object distribution information detected by the mobile robot within a preset interval or a preset distance
  • the preset interval is a period of time before the mobile robot reaches the current detection point
  • the preset distance is the distance the mobile robot moves before reaching the current detection point.
  • the preset interval may be a period of time set manually, and the preset distance may be a distance set manually.
  • the aforementioned preset interval and preset distance can be flexibly set according to specific requirements.
  • the occurrence of an abnormality when the mobile robot detects the surrounding environment may mean that the mobile robot cannot detect and obtain accurate object distribution information due to a certain failure in the process of detecting the surrounding environment.
  • the mobile robot may be unable to detect accurate object distribution information due to positioning failures, sensor failures, or failures of some processing modules inside the mobile robot.
  • controlling the mobile robot to perform abnormal recovery includes: controlling the mobile robot to retreat from the current detection point to a first detection point, where the distance between the first detection point and the current detection point is the preset distance; and controlling the mobile robot to re-detect the surrounding environment from the first detection point to re-acquire object distribution information.
  • by performing the retreat operation and re-detecting the surrounding environment to re-acquire object distribution information, the accuracy of the object distribution information obtained by the mobile robot can be ensured as far as possible.
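  • The abnormality check and the rollback recovery described above can be sketched as follows; the agreement measure, the example second threshold, and the `robot` interface with `retreat`, `clear_current_distribution`, and `detect_surroundings` methods are all hypothetical illustrations rather than the patent's implementation.

```python
import numpy as np

def detection_abnormal(current_map: np.ndarray,
                       layout_map: np.ndarray,
                       second_threshold: float = 0.5) -> bool:
    """Flag an abnormality when the current environment detection map agrees
    with the saved environment layout map on too small a fraction of cells."""
    agreement = float(np.mean((current_map > 0.5) == (layout_map > 0.5)))
    return agreement < second_threshold

def recover_from_abnormality(robot, preset_distance: float) -> None:
    """Rollback recovery: retreat by the preset distance to the first detection
    point, discard the current object distribution information, then re-detect
    the surrounding environment to re-acquire object distribution information."""
    robot.retreat(preset_distance)
    robot.clear_current_distribution()
    robot.detect_surroundings()
```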
  • controlling the mobile robot to perform abnormal recovery includes: controlling the mobile robot to reset its operating system.
  • controlling the mobile robot to reset the operating system is equivalent to controlling the mobile robot to restart the system (similar to a computer restart).
  • controlling the mobile robot to perform abnormal recovery includes: controlling the mobile robot to restart the sensors of the mobile robot.
  • the above control of the mobile robot to restart the sensor of the mobile robot may specifically refer to the control of the mobile robot to close and reopen the port of the sensor.
  • the sensors of the above-mentioned mobile robot may include lidar, encoder, gyroscope, ultrasonic sensor, infrared sensor, and so on.
  • the above method further includes: controlling the mobile robot to clear current object distribution information.
  • the above M environmental detection maps are all located in the same coordinate system.
  • the above M environmental detection maps are all maps located in a reference coordinate system, and the origin of the reference coordinate system can be located in any of the following positions: charging of the mobile robot The location of the seat; the parking position of the mobile robot after the task is completed; the location of the garbage transfer station matching the mobile robot.
  • the foregoing acquiring of M environment detection maps includes: acquiring a first environment detection map of the M environment detection maps, where the coordinate values of the grid points in the first environment detection map are coordinate values in a first coordinate system; and converting the coordinate values of the grid points in the first environment detection map into coordinate values in the reference coordinate system.
  • the first environment detection map may be any one of the above M environment detection maps; it may be determined according to the object distribution information detected when the mobile robot executes the i-th (1 ≤ i ≤ M, i an integer) work task, and the origin of the first coordinate system may be determined according to the starting position of the mobile robot when executing the i-th task.
  • the coordinate origin of the aforementioned first coordinate system may be the starting point when the mobile robot executes the i-th task (for example, it may be the center point of the starting position of the mobile robot executing the i-th task).
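  • Assuming the starting pose of the i-th task is known in the reference frame (for example from localization against the charging seat), the conversion of grid-point coordinates from the first coordinate system into the reference coordinate system is a 2-D rigid transform; the sketch below is an illustration, not the patent's exact procedure.

```python
import numpy as np

def to_reference_frame(points_xy: np.ndarray,
                       origin_in_ref: np.ndarray,
                       heading_in_ref: float) -> np.ndarray:
    """Convert (K, 2) grid-point coordinates expressed in the first coordinate
    system (origin = start position of the i-th task) into the reference
    coordinate system, given the start position `origin_in_ref` (2,) and the
    start heading `heading_in_ref` (radians) in the reference frame."""
    c, s = np.cos(heading_in_ref), np.sin(heading_in_ref)
    rotation = np.array([[c, -s], [s, c]])
    return points_xy @ rotation.T + origin_in_ref
```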
  • the above-mentioned M environment detection maps are determined based on the object distribution information detected when the mobile robot performs work tasks in the first workplace.
  • the M environment detection maps are determined from object distribution information detected while the mobile robot performs work tasks, and this object distribution information can be obtained while the mobile robot performs other tasks, which can improve work efficiency.
  • the above M environment detection maps are respectively determined based on M pieces of object distribution information, and the M pieces of object distribution information are obtained by the mobile robot detecting the surrounding environment when performing M work tasks in the first workplace.
  • Any one of the above M environment detection maps is determined based on the object distribution information detected when the mobile robot executes a corresponding work task.
  • each environment detection map can reflect the distribution of objects in the first workplace as comprehensively as possible, so that the finally obtained updated environment layout map can more comprehensively reflect the distribution of objects in the first workplace.
  • the foregoing acquiring of M environment detection maps includes: obtaining environment detection maps according to the object distribution information obtained by detecting the surrounding environment when the mobile robot performs work tasks in the first workplace, until M environment detection maps are obtained.
  • the number of acquired environment detection maps can be counted in real time; when the number reaches M, the environment fusion map can be determined based on the M environment detection maps, and the acquisition of environment detection maps can also continue, so that the process of the method in the first aspect is repeated.
  • the foregoing obtaining of environment detection maps according to the object distribution information obtained by detecting the surrounding environment when the mobile robot performs work tasks in the first workplace, until M environment detection maps are obtained, includes: obtaining one environment detection map according to one piece of object distribution information obtained by detecting the surrounding environment when the mobile robot performs one work task in the first workplace, until M environment detection maps are obtained.
  • In a second aspect, a method for updating the working map of a mobile robot is provided, which includes: obtaining M environment detection maps from the mobile robot; fusing the M environment detection maps to obtain the environment fusion map obtained by the current fusion; obtaining, from the mobile robot, the environment layout map currently saved by the mobile robot; weighting the pixel values of the environment fusion map obtained by the current fusion and the pixel values of the currently saved environment layout map to obtain the updated environment layout map; and sending the updated environment layout map to the mobile robot.
  • the above M environment detection maps are determined according to the object distribution information detected by the mobile robot during the movement process, and M is an integer greater than 1.
  • the currently saved environment layout map is obtained by weighting the environment fusion map obtained by the last fusion and the environment layout map saved last time, and the environment fusion map obtained by the last fusion is obtained by fusing the M environment detection maps obtained last time.
  • the method of the second aspect described above may be executed by a control device for controlling the operation of a mobile robot.
  • in this way, the environment layout map currently saved by the mobile robot can be updated so that it reflects the environment layout in more detail, making it easier for the mobile robot to perform work tasks according to the updated map.
  • the foregoing acquiring M environment detection maps from a mobile robot includes: receiving the M environment detection maps from the mobile robot.
  • the M environment detection maps may be acquired from the mobile robot at one time (the mobile robot sends the M environment detection maps together), or one environment detection map may be acquired from the mobile robot each time the mobile robot generates one (the mobile robot sends each environment detection map after it is generated).
  • before fusing the M environment detection maps, the above method further includes: determining whether M environment detection maps have been obtained.
  • the number of acquired environment detection maps can be counted; when it reaches M, the M environment detection maps are fused.
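  • On the control-device side, the counting and the fusion trigger could look like the following sketch (class and method names are illustrative; the fusion and weighting steps themselves are the ones described elsewhere in this application):

```python
class MapUpdateController:
    """Collect environment detection maps from the mobile robot one by one and
    hand back a batch of M maps once enough have been received."""

    def __init__(self, m_required: int):
        self.m_required = m_required
        self.detection_maps = []

    def on_detection_map(self, detection_map):
        """Call whenever one environment detection map arrives from the robot.
        Returns the list of M maps when the batch is complete, else None."""
        self.detection_maps.append(detection_map)
        if len(self.detection_maps) < self.m_required:
            return None
        batch, self.detection_maps = self.detection_maps, []
        return batch   # ready to be fused into the environment fusion map
```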
  • the above method further includes: obtaining current object distribution information from the mobile robot; determining a current environment detection map according to the current object distribution information; when the degree of consistency between the current environment detection map and the currently saved environment layout map is less than the second threshold, determining that an abnormality has occurred when the mobile robot detects the surrounding environment; and sending an abnormal recovery instruction to the mobile robot.
  • the current object distribution information is the object distribution information detected by the mobile robot within a preset interval or a preset distance
  • the preset interval is a period of time before the mobile robot reaches the current detection point
  • the preset distance is the mobile robot The distance traveled before reaching the current detection point.
  • the aforementioned abnormal recovery instruction is used to instruct the mobile robot to perform abnormal recovery. After receiving the abnormal recovery instruction, the mobile robot performs abnormal recovery in response to the abnormal recovery instruction.
  • the above-mentioned abnormal recovery includes: sending a retreat instruction to the mobile robot; sending a re-detection instruction to the mobile robot.
  • the aforementioned retreat instruction is used to instruct the mobile robot to retreat from the current detection point to the first detection point, and the distance between the first detection point and the current detection point is a preset distance. After the mobile robot receives the retreat instruction, in response to the retreat instruction, it retreats from the current detection point to the first detection point.
  • the above-mentioned re-detection instruction is used to instruct the mobile robot to re-detect the surrounding environment from the first detection point to re-acquire object distribution information. After receiving the re-detection instruction, the mobile robot re-detects the surrounding environment from the first detection point in response to the re-detection instruction to re-acquire object distribution information.
  • the above-mentioned abnormal recovery includes: sending a restart instruction to the mobile robot.
  • the above restart instruction is used to instruct the mobile robot to restart and re-detect the surrounding environment after restarting. After the mobile robot receives the restart instruction, the mobile robot restarts in response to the restart instruction, and re-detects the surrounding environment after the restart.
  • the foregoing restart instruction can instruct the mobile robot to reset the operating system, or instruct the mobile robot to restart the corresponding sensor.
  • the restart of the operating system is similar to the restart of the computer system.
  • the restart of the sensor can specifically close and reopen the port of the sensor.
  • the sensors of the above-mentioned mobile robot may include lidar, encoder, gyroscope, ultrasonic sensor, infrared sensor, and so on.
  • the above method further includes: sending a clear instruction to the mobile robot.
  • the above-mentioned clearing instruction is used to clear the current object distribution information. After the mobile robot receives the clearing instruction, it clears the current object distribution information in response to the clearing instruction.
  • the above M environmental detection maps are all located in the same coordinate system.
  • because the above M environment detection maps are located in the same coordinate system, they can be fused more accurately to obtain a more accurate environment fusion map.
  • the above M environmental detection maps are all maps located in a reference coordinate system, and the origin of the reference coordinate system can be located in any of the following positions: charging of the mobile robot The location of the seat; the parking position of the mobile robot after the task is completed; the location of the garbage transfer station matching the mobile robot.
  • the center point of any one of the above positions or the point of other specific positions can be used as the origin of the above-mentioned reference coordinate system.
  • In a third aspect, a method for updating the working map of a mobile robot is provided, which includes: detecting the surrounding environment during movement to obtain environment layout information; determining one environment detection map based on the environment layout information detected in one detection period; sending M environment detection maps to the control device; and receiving an updated environment layout map from the control device.
  • the updated environment layout map is obtained by the control device by weighting the pixel values of the environment fusion map obtained by the current fusion and the pixel values of the currently saved environment layout map, where the environment fusion map obtained by the current fusion is obtained by fusing the M environment detection maps; the M environment detection maps are determined according to the object distribution information detected by the mobile robot during movement, and M is an integer greater than 1; the currently saved environment layout map is obtained by weighting the environment fusion map obtained by the last fusion and the environment layout map saved last time, and the environment fusion map obtained by the last fusion is obtained by fusing the M environment detection maps obtained last time.
  • the method of the third aspect described above can be executed by a mobile robot.
  • the mobile robot obtains M environment detection maps while working and sends them to the control device, so that the control device can update the environment layout map currently saved by the mobile robot according to the M environment detection maps; the updated environment layout map can thus reflect the environment layout in more detail, making it easier for the mobile robot to perform work tasks according to the updated map.
  • the mobile robot may send the M environment detection maps to the control device either after it has generated all M of them, or one at a time as each environment detection map is generated, until M environment detection maps have been sent to the control device.
  • the above method further includes: sending current object distribution information to the control device, so that the control device can determine a current environment detection map according to the current object distribution information; receiving an abnormal recovery instruction sent by the control device; and performing abnormal recovery in response to the abnormal recovery instruction.
  • the current object distribution information is the object distribution information detected by the mobile robot within a preset interval or a preset distance
  • the preset interval is a period of time before the mobile robot reaches the current detection point
  • the preset distance is the mobile robot The distance traveled before reaching the current detection point.
  • the above abnormal recovery instruction is generated by the control device when the degree of consistency between the current environment detection map and the currently saved environment layout map is less than the second threshold; in this case, the control device determines that an abnormality has occurred when the mobile robot detects the surrounding environment and sends the abnormal recovery instruction to the mobile robot.
  • the above-mentioned abnormal recovery instruction is used to instruct the mobile robot to perform abnormal recovery. After receiving the abnormal recovery instruction, the mobile robot performs abnormal recovery in response to the abnormal recovery instruction.
  • receiving an abnormal recovery instruction sent by the control device, and performing abnormal recovery in response to the abnormal recovery instruction includes: receiving a rollback instruction and a re-detection instruction sent by the control device; In response to the retreat instruction, retreat from the current detection point to the first detection point; in response to the re-detection instruction, re-detect the surrounding environment from the first detection point to reacquire object distribution information.
  • the aforementioned rollback instruction and re-detection instruction may be two specific instructions in the abnormal recovery instruction.
  • the aforementioned abnormal recovery instruction may also include a restart instruction, which is used to instruct the mobile robot to restart and re-detect the surrounding environment after the restart. After the mobile robot receives the restart instruction, the mobile robot restarts in response to the restart instruction, and re-detects the surrounding environment after the restart.
  • the foregoing restart instruction can instruct the mobile robot to reset the operating system, or instruct the mobile robot to restart the corresponding sensor.
  • the restart of the operating system is similar to the restart of the computer system.
  • the restart of the sensor can specifically close and reopen the port of the sensor.
  • the sensors of the above-mentioned mobile robot may include lidar, encoder, gyroscope, ultrasonic sensor, infrared sensor, and so on.
  • the above method further includes: receiving a clearing instruction sent by the control device, and clearing the current object distribution information in response to the clearing instruction.
  • because the above M environment detection maps are located in the same coordinate system, they can be fused more accurately to obtain a more accurate environment fusion map.
  • the above M environmental detection maps are all maps located in a reference coordinate system, and the origin of the reference coordinate system can be located in any of the following positions: charging of the mobile robot The location of the seat; the parking position of the mobile robot after the task is completed; the location of the garbage transfer station matching the mobile robot.
  • the center point of any one of the above positions or the point of other specific positions can be used as the origin of the above-mentioned reference coordinate system.
  • In a fourth aspect, a method for updating the working map of a mobile robot is provided, which includes: detecting the surrounding environment during movement to obtain environment layout information; determining one environment detection map based on the environment layout information detected in one detection period; when M environment detection maps have been obtained, fusing the M environment detection maps to obtain the environment fusion map obtained by the current fusion; and weighting the pixel values of the environment fusion map obtained by the current fusion and the pixel values of the currently saved environment layout map to obtain the updated environment layout map.
  • the currently saved environment layout map is obtained by weighting the environment fusion map obtained by the last fusion and the environment layout map saved last time, and the environment fusion map obtained by the last fusion is obtained by fusing the M environment detection maps obtained last time.
  • the method of the fourth aspect described above can be executed by a mobile robot.
  • in this way, the environment layout map currently saved by the mobile robot can be updated so that it reflects the environment layout in more detail, making it easier for the mobile robot to perform work tasks according to the updated map.
  • before fusing the M environment detection maps, the above method further includes: determining whether M environment detection maps have been acquired.
  • the number of environmental detection maps can be counted to determine whether the number of environmental detection maps that have been generated has reached M.
  • the above method further includes: determining a current environment detection map according to current object distribution information; when the degree of consistency between the current environment detection map and the currently saved environment layout map is less than the second threshold, determining that an abnormality has occurred when the mobile robot detects the surrounding environment; and performing abnormal recovery.
  • the current object distribution information is the object distribution information detected by the mobile robot within a preset interval or a preset distance
  • the preset interval is a period of time before the mobile robot reaches the current detection point
  • the preset distance is the distance the mobile robot moves before reaching the current detection point.
  • performing abnormal recovery includes: retreating from the current detection point to a first detection point, where the distance between the first detection point and the current detection point is the preset distance; and re-detecting the surrounding environment from the first detection point to re-acquire object distribution information.
  • the mobile robot may retreat from the current detection point to the first detection point by controlling the mobile robot motion platform.
  • the mobile robot can re-detect the surrounding environment by controlling the sensor to regain the object distribution information.
  • the above method further includes: clearing current object distribution information.
  • the mobile robot can erase the current object distribution information stored in the storage module.
  • performing the above-mentioned abnormal recovery includes: performing a restart operation.
  • the above restart operation can be either to reset the operating system or to restart the corresponding sensor.
  • the restart of the operating system is similar to the restart of the computer system.
  • the restart of the sensor can specifically close and reopen the port of the sensor.
  • the sensors of the above-mentioned mobile robot may include lidar, encoder, gyroscope, ultrasonic sensor, infrared sensor, and so on.
  • In a fifth aspect, an apparatus for updating the work map of a mobile robot is provided, including modules for executing the method in the first aspect or the second aspect.
  • In a sixth aspect, a mobile robot is provided, including modules for executing the method in the third aspect or the fourth aspect.
  • In a seventh aspect, an apparatus for updating the work map of a mobile robot is provided, including: a memory for storing a program; and a processor for executing the program stored in the memory; when the program stored in the memory is executed, the processor is configured to execute the method in the above first aspect.
  • In an eighth aspect, a device for updating the work map of a mobile robot is provided, including: a transceiver; a memory for storing programs; and a processor for executing the programs stored in the memory; when the programs stored in the memory are executed, the transceiver and the processor are used to execute the method in the above second aspect.
  • In a ninth aspect, a mobile robot is provided, which includes: a transceiver; a memory for storing programs; and a processor for executing the programs stored in the memory; when the programs stored in the memory are executed, the transceiver and the processor are used to execute the method in the above third aspect.
  • In a tenth aspect, a mobile robot is provided, including: a memory for storing a program; and a processor for executing the program stored in the memory; when the program stored in the memory is executed, the processor is used to execute the method in the above fourth aspect.
  • In an eleventh aspect, a computer storage medium is provided, where the storage medium stores program code, and the program code includes instructions for executing the steps of the method in any one of the first, second, third, and fourth aspects.
  • the aforementioned storage medium may specifically be a non-volatile storage medium.
  • In a twelfth aspect, a computer program product containing instructions is provided; when the computer program product runs on a computer, the computer executes the method in any one of the first, second, third, and fourth aspects above.
  • In a thirteenth aspect, a chip is provided, which includes a processor and a data interface; the processor reads instructions stored in a memory through the data interface and executes the method in any one of the first, second, third, and fourth aspects described above.
  • the chip may further include a memory in which instructions are stored, and the processor is configured to execute instructions stored on the memory.
  • the processor is configured to execute the method in any one of the first aspect, the second aspect, the third aspect, and the fourth aspect.
  • the aforementioned chip may specifically be a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC).
  • Figure 1 is a schematic diagram of the sweeping robot
  • FIG. 2 is a schematic flowchart of a method for updating a working map of a mobile robot according to an embodiment of the present application
  • Figure 3 is a schematic diagram of an environmental detection map
  • Figure 4 is a schematic diagram of an environmental detection map after filtering processing
  • Figure 5 is a schematic diagram of an environment layout map
  • Figure 6 is a schematic diagram of the process of obtaining M environmental detection maps
  • FIG. 7 is a schematic diagram of a process of obtaining object distribution information in an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a method for updating a working map of a mobile robot according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of the abnormal recovery process of the method for updating the working map of a mobile robot according to an embodiment of the present application.
  • FIG. 10 is a schematic flowchart of a method for updating a work map of a mobile robot according to an embodiment of the present application
  • FIG. 11 is a schematic diagram of the process of updating the work map of the cleaning robot during the cleaning task
  • FIG. 12 is a schematic block diagram of an apparatus for updating a working map of a mobile robot according to an embodiment of the present application
  • FIG. 13 is a schematic block diagram of an apparatus for updating a working map of a mobile robot according to an embodiment of the present application
  • FIG. 14 is a schematic block diagram of a mobile robot according to an embodiment of the present application.
  • Fig. 15 is a schematic block diagram of a mobile robot according to an embodiment of the present application.
  • Fig. 16 is a schematic block diagram of an apparatus for updating a working map of a mobile robot according to an embodiment of the present application
  • Fig. 17 is a schematic block diagram of a mobile robot according to an embodiment of the present application.
  • the mobile robot in this application may be a robot that can move and perform certain tasks in indoor environments (for example, homes, shopping malls, and factory floors).
  • the mobile robot may include a sweeping robot, a handling robot, and so on.
  • Figure 1 shows a common sweeping robot, which can be charged through a charging stand.
  • when performing tasks, mobile robots generally first obtain the distribution of objects in the corresponding workplace, that is, they first obtain the environment layout map of the workplace and then perform specific tasks according to that map. However, because the distribution of objects in the same workplace may change, the work map of the mobile robot needs to be updated so that the mobile robot can better perform its work tasks.
  • the method for updating the work map of a mobile robot according to an embodiment of the present application will be described in detail below with reference to FIG. 2.
  • Fig. 2 is a schematic flowchart of a method for updating a work map of a mobile robot according to an embodiment of the present application.
  • the method shown in FIG. 2 may be executed by a control device of the mobile robot, and the control device is used to update the work map of the mobile robot.
  • the control device can be a control module located inside the mobile robot, or an independent device located outside the mobile robot.
  • when the above-mentioned control device is an independent device located outside the mobile robot, the control device may be an electronic device, and the electronic device may specifically be a mobile terminal (for example, a smartphone), a tablet computer, a notebook computer, an augmented reality/virtual reality device, a wearable device, and so on.
  • the method shown in FIG. 2 includes steps 1001 to 1003, which are respectively described in detail below.
  • the foregoing M environmental detection maps may be determined by the object distribution information detected by the mobile robot during the movement.
  • the mobile robot detects surrounding objects to obtain object distribution information.
  • the above-mentioned acquiring M environment detection maps includes: controlling the mobile robot to detect surrounding objects to obtain object distribution information in the first place; and determining M environment detection maps according to the object distribution information in the first place.
  • the environment detection map acquired in the above step 1001 may be as shown in FIG. 3.
  • the environment detection map displays contour lines or boundary lines of objects in the home environment, and the environment detection map is determined based on the object distribution information obtained when the mobile robot detects surrounding objects in the home environment.
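  • As an illustration of how detected object distribution information can be turned into such an environment detection map (grid size, 5 cm resolution, and the assumption of non-negative coordinates are arbitrary choices for this sketch):

```python
import numpy as np

def distribution_to_detection_map(obstacle_points_xy: np.ndarray,
                                  grid_shape=(200, 200),
                                  resolution: float = 0.05) -> np.ndarray:
    """Rasterize detected obstacle coordinates (in meters) into an occupancy-style
    environment detection map; each detected point marks one grid cell."""
    grid = np.zeros(grid_shape, dtype=np.uint8)
    cells = np.floor(obstacle_points_xy / resolution).astype(int)
    for x, y in cells:
        if 0 <= y < grid_shape[0] and 0 <= x < grid_shape[1]:
            grid[y, x] = 255   # cell contains a detected object boundary
    return grid
```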
  • the above-mentioned value of M is preset.
  • the specific value of M can be set by the manufacturer before leaving the factory, or set by the user before the detection of the mobile robot.
  • the above-mentioned value of M can also be flexibly set through the configuration information of the number of environmental detection maps.
  • the method shown in FIG. 2 further includes: obtaining configuration information of the number of environment detection maps, where the configuration information includes the value of M; and determining the value of M according to the configuration information of the number of environment detection maps.
  • the configuration information of the number of environment detection maps can flexibly realize the setting of M, which is convenient to obtain a corresponding number of environment detection maps as required.
  • the above-mentioned environmental detection map quantity configuration information may be information input by the user.
  • the user can input the above-mentioned environmental detection map quantity configuration information through the control interface of the control device.
  • the configuration information of the number of environmental detection maps includes the specific value of M.
  • the above-mentioned value of M can also be adjusted by modifying the information of the number of environmental detection maps.
  • the user can input the environmental detection map quantity modification information to the control device.
  • the control device can adjust the value of M according to the environmental detection map quantity modification information.
  • the foregoing M environment detection maps are determined based on the object distribution information detected when the mobile robot performs work tasks in the first workplace.
  • the M environment detection maps are determined from object distribution information detected while the mobile robot performs work tasks, and this object distribution information can be obtained while the mobile robot performs other tasks, which can improve work efficiency.
  • the above M environmental detection maps are respectively determined based on M object distribution information, and the M object distribution information is obtained by detecting the surrounding environment when the mobile robot performs M work tasks in the first workplace.
  • any one of the environmental detection maps is determined based on the object distribution information detected when the mobile robot performs a corresponding work task.
  • the i-th environmental detection map in the M environmental detection maps may be determined by the object distribution information obtained by the mobile robot detecting the surrounding environment during the j-th work task, where i and j are both positive integers, And 1 ⁇ i ⁇ M, 1 ⁇ j ⁇ M, i and j may be the same or different.
  • each environment detection map can reflect the distribution of objects in the first workplace as comprehensively as possible, so that the finally obtained updated environment layout map can more comprehensively reflect the distribution of objects in the first workplace.
  • the foregoing acquiring of M environment detection maps includes: obtaining environment detection maps according to the object distribution information obtained by detecting the surrounding environment when the mobile robot performs work tasks in the first workplace, until M environment detection maps are obtained.
  • environment detection maps can be acquired one by one until the number of environment detection maps reaches M.
  • the number of acquired environment detection maps can be counted in real time; when the number reaches M, the environment fusion map can be determined based on the M environment detection maps, and the acquisition of environment detection maps can also continue; when M environment detection maps have been acquired again, the environment fusion map is determined again based on the newly acquired M environment detection maps.
  • the mobile robot can detect the surrounding environment through its own sensors or detectors to obtain distribution information of surrounding objects.
  • the aforementioned sensor or detector may specifically include at least one of a camera (specifically, a depth camera), an infrared sensor, a ranging radar, and an ultrasonic sensor.
  • an environment detection map set C (hereinafter referred to as set C) may be maintained in the process of updating the map.
  • the set C is used to save the acquired environmental detection map.
  • when the number of environment detection maps in the set C reaches M, all M environment detection maps in the set C can be written into an environment layout map update set D (hereinafter referred to as set D), and the environment detection maps saved in the set C are cleared (so that subsequently obtained environment detection maps can be saved in the set C); the M environment detection maps in the set D are then used to determine the environment fusion map.
  • the abnormal environment detection maps in the set D can be eliminated, and the environment layout map update set Ds can be obtained.
  • the environment detection maps in the set Ds can be used to determine the environment fusion map.
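  • The set-C / set-D bookkeeping described above can be sketched as follows; the function name and the caller-supplied `is_abnormal` predicate are illustrative assumptions.

```python
def update_map_sets(set_c: list, set_d: list, new_map, m_required: int, is_abnormal):
    """Append the newly obtained detection map to set C; once C holds M maps,
    move them into set D and clear C.  Returns the screened set Ds (abnormal
    maps removed) when a full batch has just been moved, otherwise None."""
    set_c.append(new_map)
    if len(set_c) < m_required:
        return None
    set_d.extend(set_c)          # write all M maps from C into D
    set_c.clear()                # next detection maps are saved into C again
    return [m for m in set_d if not is_abnormal(m)]   # set Ds used for fusion
```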
  • when the M environment detection maps are fused (specifically, superimposed) to obtain the environment fusion map, the M environment detection maps can be fused directly, or they can first undergo certain preprocessing (for example, filtering or noise reduction) before fusion, or the M environment detection maps can first be screened and the maps obtained after screening fused.
  • there may be several different ways to determine the environment fusion map in step 1002, and these ways are described in detail below.
  • the first way is to directly fuse the M environment detection maps to obtain the environment fusion map of the current fusion.
  • the pixel values of the M environment detection maps may be averaged, and the obtained average pixel value is used as the pixel value of the environment fusion map.
  • the environment fusion map can be determined more conveniently and quickly.
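As a minimal illustration of the averaging step just described, the following Python/NumPy sketch averages M grid maps of the same size; it assumes the maps are already aligned in the same coordinate system and stored as arrays with pixel values in the 0–255 range used later in this document.

```python
import numpy as np

def fuse_by_average(detection_maps):
    """Fuse aligned detection maps by averaging their pixel values.

    detection_maps: list of M 2-D uint8 arrays of identical shape,
    where 0 means occupied and 255 means free (see the pixel-value
    convention described later in this document).
    """
    stack = np.stack([m.astype(np.float32) for m in detection_maps], axis=0)
    fused = stack.mean(axis=0)          # average pixel value per grid cell
    return fused.round().astype(np.uint8)

# usage: three hypothetical 100x100 maps, one of which saw an obstacle
maps = [np.full((100, 100), 255, dtype=np.uint8) for _ in range(3)]
maps[0][40:60, 40:60] = 0
fusion = fuse_by_average(maps)
```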
  • the second method is to determine the environmental detection maps with good consistency from M environmental detection maps and then merge them.
  • fusing the M environment detection maps to obtain the environment fusion map of the current fusion specifically includes: determining N environment detection maps from the M environment detection maps, and fusing the N environment detection maps to obtain the environment fusion map of the current fusion.
  • the degree of agreement between any two environmental detection maps in the above N environmental detection maps is greater than or equal to the first threshold, and N is a positive integer less than or equal to M.
  • the foregoing first threshold may be a preset threshold, and the size of the first threshold may be set as required.
  • when the robot's work requires high map accuracy, the first threshold can be set to a larger value, and when the robot's work requires low map accuracy, the first threshold can be set to a smaller value.
  • the above-mentioned first threshold may be set to 0.7.
  • the degree of consistency between two environment detection maps can be expressed by the degree of grid matching between them: the higher the grid matching degree of two environment detection maps, the higher the consistency between them.
  • the degree of consistency between any two of the M detection maps can be calculated by the above formulas (1) and (2), and the environment detection maps whose degree of consistency meets the requirement (for example, greater than a certain threshold) can then be selected.
  • the fusion process of the aforementioned N environmental detection maps is similar to the aforementioned fusion process of M environmental detection maps, and will not be described in detail here.
  • in the second way, by selecting environment detection maps with good consistency from the M environment detection maps for fusion, a more accurate environment fusion map can generally be obtained.
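Formulas (1) and (2) are not reproduced in this excerpt, so the sketch below uses one common grid-matching measure (the fraction of cells on which two maps agree about occupancy) purely as an assumed stand-in, together with a greedy selection that keeps only maps whose pairwise consistency stays above the first threshold.

```python
import numpy as np

OCC_THRESHOLD = 180  # grid threshold used later in this document for occupancy decisions

def consistency(map_a, map_b):
    """Assumed grid-matching measure: fraction of cells on which both maps
    agree about occupied/free; stands in for formulas (1) and (2)."""
    occ_a = map_a < OCC_THRESHOLD
    occ_b = map_b < OCC_THRESHOLD
    return float(np.mean(occ_a == occ_b))

def select_consistent(maps, first_threshold=0.7):
    """Keep a subset in which every pair has consistency >= first_threshold.
    The greedy strategy is an assumption; the document only requires pairwise
    consistency among the N selected maps."""
    selected = []
    for m in maps:
        if all(consistency(m, s) >= first_threshold for s in selected):
            selected.append(m)
    return selected
```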
  • the third way is to preprocess the M environment detection maps first, and then fuse the preprocessed M environment detection maps to obtain the environment fusion map of the current fusion.
  • fusing the M environment detection maps to obtain the environment fusion map of the current fusion specifically includes: preprocessing the M environment detection maps to obtain M preprocessed environment detection maps, and fusing the M preprocessed environment detection maps to obtain the environment fusion map of the current fusion.
  • the preprocessing of the M environment detection maps may be that, when an image area of one of the M detection maps is significantly different from the image content of the other environment detection maps, the image content in that image area is deleted.
  • for example, if the M environment detection maps include a first environment detection map, and the image content of area A of the first environment detection map is significantly different from that of the other environment detection maps (the environment detection maps in the M environment detection maps other than the first environment detection map), then the image content of area A in the first environment detection map can be deleted or cleared to obtain the preprocessed first environment detection map.
  • the degree of consistency between any two of the M preprocessed environment detection maps may be greater than or equal to a third threshold, where the third threshold may be a preset threshold and its magnitude may be set according to actual needs.
  • when the robot's work requires high map accuracy, the third threshold can be set to a larger value, and when the robot's work requires low map accuracy, the third threshold can be set to a smaller value.
  • the third threshold in the third manner and the first threshold in the second manner may be the same or different.
  • in the third way, by removing the image content of the M environment detection maps that is significantly different from the other environment detection maps and then performing image fusion, a more accurate environment fusion map can generally be obtained.
  • the M environment detection maps may first be filtered or noise-reduced, and then the subsequent processing may be performed.
  • the image may be filtered first, and then the environment fusion map is determined according to the filtered environment detection map.
  • determining the environment fusion map according to the M environment detection maps includes: filtering the M environment detection maps to obtain M filtered environment detection maps, and fusing the M filtered environment detection maps to obtain the environment fusion map of the current fusion.
  • the filtering process here may specifically be morphological filtering to extract line features of the image.
  • the interference of small objects in the environment can be removed and the main layout of the environment can be preserved.
  • the environment detection map before the filtering process may be as shown in FIG. 3, and the environment detection map shown in FIG. 3 may be filtered to obtain the environment detection map shown in FIG. 4.
  • the environment detection map in Figure 4 has less image noise and can show the main layout of the environment.
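The patent only states that morphological filtering is used to extract line features and remove small objects; the concrete steps in the following OpenCV sketch (inversion plus opening with long line-shaped kernels, and the kernel sizes) are illustrative assumptions.

```python
import cv2
import numpy as np

def extract_line_features(detection_map):
    """Morphological-filtering sketch for one detection map.

    Convention from the document: pixel value 0 = occupied, 255 = free.
    """
    # Invert so that obstacles/walls become bright foreground.
    obstacles = cv2.bitwise_not(detection_map.astype(np.uint8))
    h_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 1))
    v_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 15))
    # Opening keeps only structures at least as long as the kernel,
    # removing small isolated objects while preserving wall-like lines.
    h_lines = cv2.morphologyEx(obstacles, cv2.MORPH_OPEN, h_kernel)
    v_lines = cv2.morphologyEx(obstacles, cv2.MORPH_OPEN, v_kernel)
    lines = cv2.bitwise_or(h_lines, v_lines)
    # Invert back to the 0 = occupied, 255 = free convention.
    return cv2.bitwise_not(lines)
```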
  • in step 1002, the M environment detection maps are saved in the set C, and the M environment detection maps are written into the set D.
  • after the abnormal maps in the set D are removed to obtain the set Ds, there are N environment detection maps in the set Ds.
  • the following steps can be performed.
  • Step A: obtain the map boundaries (x_min, x_max) and (y_min, y_max) of the environment fusion map according to the set Ds.
  • Step B: determine the environment fusion map according to the set Ds.
  • the size of the environment fusion map can be determined according to the map boundaries of the environment fusion map, the environment fusion map is then created with (x_min, y_min) as the image origin (the image origin is located at the upper left corner of the environment fusion map), the grid points in the N detection maps in the set Ds are sequentially projected into the environment fusion map, and the value of each pixel of the environment fusion map is taken as the collection of the values of the corresponding pixels of the N detection maps in the set Ds.
  • the environment fusion map can be determined according to formula (5).
  • in this way, the environment fusion map can be obtained and can be denoted as Mpre.
  • after filtering, the filtered environment fusion map can be obtained, denoted as Mobs.
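Steps A and B can be sketched as follows. Formula (5) is not reproduced in this excerpt, so combining overlapping cells by taking the minimum value (occupied wins) is an illustrative assumption, and the resolution value and the row/column orientation are likewise assumptions.

```python
import numpy as np

def fuse_projected(maps_with_origins, resolution=0.05):
    """Compute the fusion-map boundary from the maps in set Ds and project
    their grid points into one fusion map.

    maps_with_origins: list of (grid, (x0, y0)) where grid is a 2-D uint8
    array (0 = occupied, 255 = free) and (x0, y0) is the coordinate of its
    upper-left cell in the shared reference frame.
    """
    # Step A: map boundaries (x_min, x_max) and (y_min, y_max).
    x_min = min(x0 for _, (x0, _) in maps_with_origins)
    y_min = min(y0 for _, (_, y0) in maps_with_origins)
    x_max = max(x0 + g.shape[1] * resolution for g, (x0, _) in maps_with_origins)
    y_max = max(y0 + g.shape[0] * resolution for g, (_, y0) in maps_with_origins)

    # Step B: create the fusion map with (x_min, y_min) as image origin.
    width = int(round((x_max - x_min) / resolution))
    height = int(round((y_max - y_min) / resolution))
    fused = np.full((height, width), 255, dtype=np.uint8)   # initially all free

    for grid, (x0, y0) in maps_with_origins:
        col = int(round((x0 - x_min) / resolution))
        row = int(round((y0 - y_min) / resolution))
        r2 = min(row + grid.shape[0], height)
        c2 = min(col + grid.shape[1], width)
        fused[row:r2, col:c2] = np.minimum(fused[row:r2, col:c2],
                                           grid[:r2 - row, :c2 - col])
    return fused
```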
  • the above environment layout map is an environment layout map of the first workplace currently saved by the mobile robot.
  • the mobile robot can replace the original environment layout map and save the updated environment layout map, until the next time the map is updated, when the environment layout map will be updated again according to the newly detected environment detection maps.
  • the environment layout map in step 1003 or the updated environment layout map may be as shown in FIG. 5, and the environment layout map in FIG. 5 shows the layout of objects in a home environment.
  • the environment layout map saved by the mobile robot is updated by means of the multiple detection maps obtained while the mobile robot is working, so that the updated environment layout map can reflect the environment layout in more detail, which makes it convenient for the mobile robot to better perform work tasks according to the updated environment layout map.
  • during the weighting processing, the weight corresponding to the pixel values of the environment fusion map obtained by the current fusion is a first weight, and the weight corresponding to the pixel values of the currently saved environment layout map is a second weight, where the magnitudes of the first weight and the second weight can be determined according to the map update requirements of the mobile robot.
  • the map update requirement of the mobile robot may be the requirement of the mobile robot (when performing a task) on the speed (or frequency) of updating the environment layout map, or the requirement of the mobile robot (when performing a task) on the update amplitude of the environment layout map.
  • generally, when the mobile robot requires the environment layout map to be updated quickly, the first weight can be set to a larger value and the second weight to a smaller value; conversely, when fast updating is not required, the first weight can be set to a smaller value and the second weight to a larger value.
  • similarly, when the mobile robot requires a large update magnitude of the environment layout map, the first weight can be set to a larger value and the second weight to a smaller value; when a small update magnitude is required, the first weight can be set to a smaller value and the second weight to a larger value.
  • the environment layout map can be flexibly updated according to the map update requirements of the mobile robot.
  • the magnitudes of the first weight and the second weight being determined according to the map update requirements of the mobile robot includes: the first weight is positively correlated with the environment layout map update frequency required by the mobile robot, and the second weight is negatively correlated with that update frequency.
  • that is, when the map update frequency required by the mobile robot (when performing tasks) is high, the first weight can be set to a larger value and the second weight to a smaller value (for example, the first weight set to 0.7 and the second weight set to 0.3); conversely, when the required map update frequency is low, the first weight can be set to a smaller value and the second weight to a larger value (for example, the first weight set to 0.3 and the second weight set to 0.7).
  • the magnitudes of the first weight and the second weight being determined according to the map update requirements of the mobile robot includes: the first weight is positively correlated with the environment layout map update magnitude required by the mobile robot, and the second weight is negatively correlated with that update magnitude.
  • first weight and second weight may also be preset.
  • the above-mentioned first weight and second weight are set by the user.
  • the user can change the values of the first weight and the second weight through the control interface.
  • the user can also indirectly implement the setting of the first weight and the second weight by setting other parameters.
  • for example, the user can indirectly set the first weight and the second weight by setting a map update frequency parameter.
  • the environment layout map can be flexibly updated according to the map update requirements of the mobile robot.
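A trivial sketch of choosing the weight pair from the update requirement follows; the requirement labels and the mapping itself are assumptions made for illustration, and only the 0.7/0.3 pairing is an example value taken from the document.

```python
def choose_weights(update_requirement):
    """Map an update requirement to (first_weight, second_weight).

    first_weight multiplies the newly fused map, second_weight multiplies
    the currently saved layout map; the two sum to 1.
    """
    table = {
        "fast_update": (0.7, 0.3),   # high update frequency / large magnitude
        "slow_update": (0.3, 0.7),   # low update frequency / small magnitude
    }
    return table.get(update_requirement, (0.5, 0.5))

first_w, second_w = choose_weights("fast_update")
```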
  • the environment layout map in the above step 1003 can be denoted as Mr.
  • the map Mr can be updated according to formula (6) to obtain the updated environment layout map.
  • M_r_new = α * M_r_old + (1 - α) * M_obs    (6)
  • where M_r_old represents the environment layout map before the update, M_r_new represents the updated environment layout map, and M_obs represents the filtered environment fusion map.
  • α is the weight of M_r_old (corresponding to the second weight above), and (1 - α) is the weight of M_obs (corresponding to the first weight above).
  • the value of α may specifically be 0.7, so that during fusion the weight of M_r_old is 0.7 and the weight of M_obs is 0.3.
  • the value of each pixel in M_r_new can be compared with a grid threshold; if the pixel value is greater than the threshold (the threshold can be set to 180 in this application), the point is considered an occupied point of the environment, otherwise it is considered a free point.
  • the environmental layout map can actually be regarded as a probability map.
  • the pixel value of the environment layout map ranges from 0 to 255, where 0 indicates that the grid point/pixel is occupied (there is an obstacle), 255 indicates that the grid point is free (no obstacle), and other values indicate the probability of being free (the closer the value is to 255, the greater the probability of being free).
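A compact sketch of the update in formula (6) and the subsequent thresholding might look as follows; the occupancy test below follows the 0 = occupied / 255 = free convention just described, and where the text is ambiguous about the comparison direction the reading used here is an assumption, as are the function and variable names.

```python
import numpy as np

GRID_THRESHOLD = 180  # threshold value given in the document

def update_layout_map(m_r_old, m_obs, alpha=0.7):
    """Formula (6): M_r_new = alpha * M_r_old + (1 - alpha) * M_obs.

    m_r_old: currently saved environment layout map (uint8, 0..255)
    m_obs:   filtered environment fusion map (uint8, 0..255)
    alpha:   weight of the old layout map; 0.7 is the example value
             used in the document.
    """
    m_r_new = (alpha * m_r_old.astype(np.float32)
               + (1.0 - alpha) * m_obs.astype(np.float32))
    return m_r_new.round().astype(np.uint8)

def occupied_cells(m_r_new, threshold=GRID_THRESHOLD):
    """Cells treated as occupied: values below the grid threshold, following
    the 0 = occupied / 255 = free convention described above (an assumed
    reading of the threshold comparison)."""
    return m_r_new < threshold
```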
  • each of the M environment detection maps may be determined based on the object distribution information detected when the mobile robot performs a task.
  • the process of determining an environment detection map based on the object distribution information detected during one task is described in detail below, taking Fig. 6 as an example.
  • Fig. 6 is a schematic diagram of the process of obtaining M environmental detection maps.
  • the process shown in FIG. 6 includes steps 1001a to 1001f, which can be regarded as a refinement or specific implementation of the above step 1001, and these steps are described in detail below.
  • the mobile robot can use its own detector or sensor (the detector or sensor can be a camera, an infrared sensor, a ranging radar, or an ultrasonic sensor), under the control of the control device (the control device can be a separate control device or a control module located inside the mobile robot), to detect the surrounding environment (or surrounding objects) and obtain object distribution information reflecting the distribution of surrounding objects.
  • the mobile robot itself can determine whether the current task is completed. Taking a sweeping robot as an example, determining whether the current task is completed may specifically be determining whether the area currently cleaned meets a preset requirement or whether the cleaning time has reached a preset requirement; if the currently cleaned area meets the preset requirement, or the current cleaning time reaches the preset requirement, it can be confirmed that the current task is completed.
  • if it is determined in step 1001c that the current task is completed, step 1001d is executed to determine an environment detection map; if it is determined in step 1001c that the current task has not been completed, step 1001b can be executed again.
  • the mobile robot can send the object distribution information obtained during execution of the current task to the control device, and the control device (which can be a separate control device or a control module located inside the mobile robot) determines an environment detection map according to the object distribution information.
  • when it is determined in step 1001e that the number of environment detection maps reaches M, step 1001f is executed, that is, M environment detection maps are acquired; when it is determined in step 1001e that the number of environment detection maps has not reached M, step 1001b is executed again.
  • control device may count the number of acquired environment detection maps, and when the number of environment detection maps does not reach M, continue to control the mobile robot to obtain object distribution information until M environment detection maps are acquired.
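A control-loop sketch of steps 1001a–1001f is given below; the robot interfaces (`detect_once`, `task_done`) and the `build_detection_map` callable are hypothetical placeholders standing in for the sensor detection, the task-completion check, and the map construction described in the text.

```python
def acquire_detection_maps(robot, build_detection_map, m_required):
    """Keep collecting object distribution information during work tasks
    until M environment detection maps have been built (steps 1001a-1001f)."""
    detection_maps = []
    while len(detection_maps) < m_required:          # step 1001e
        task_info = []
        while not robot.task_done():                 # step 1001c
            task_info.append(robot.detect_once())    # step 1001b
        detection_maps.append(build_detection_map(task_info))  # step 1001d
    return detection_maps                            # step 1001f
```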
  • an environment detection map may be determined based on (all) object distribution information acquired during each execution of the task.
  • one or more environmental detection maps can be determined based on all the object distribution information acquired during each task execution.
  • an environment detection map may also be determined based on (all) object distribution information acquired during the process of executing multiple tasks by the mobile robot.
  • that is, an environment detection map can be determined based on all the object distribution information acquired while the mobile robot performs one work task, based on part of the object distribution information obtained while the mobile robot performs one work task, or based on part of the object distribution information obtained while the mobile robot performs the work task multiple times (twice or more).
  • specifically, a real-time environment detection map corresponding to a preset time can be obtained according to the object distribution information acquired by the mobile robot within that preset time, and this real-time environment detection map is then compared with the environment layout map currently saved by the mobile robot. If the real-time environment detection map is significantly different from the currently saved environment layout map, the difference may be caused by a positioning fault or abnormality that makes the object distribution information acquired in the current period insufficiently accurate. In this case, the object distribution information acquired in the current period can be discarded, so as to ensure the accuracy of the acquired M environment detection maps.
  • in this way, object information that is likely to be inaccurate due to abnormal positioning can be eliminated, so that the obtained object distribution information is more accurate.
  • Fig. 7 is a schematic diagram of a process of obtaining object distribution information in an embodiment of the present application.
  • the process shown in FIG. 7 includes steps 2001 to 2007, and these steps may occur in the process of obtaining M environmental detection maps in step 1001.
  • the steps 2001 to 2007 are described in detail below.
  • Step 2001 represents the start of acquiring the current detection map.
  • the above-mentioned current object distribution information is the object distribution information detected by the mobile robot within a preset interval or a preset distance.
  • the preset interval is a period of time before the mobile robot reaches the current detection point, and the preset distance is the distance moved by the mobile robot before reaching the current detection point.
  • the preset interval may be a period of time set manually, and the preset distance may be a distance set manually.
  • the aforementioned preset interval and preset distance can be flexibly set according to specific requirements.
  • the current environment detection map in step 2003 can be obtained in the following two ways.
  • Method A: determine according to the object distribution information acquired by the mobile robot within a preset period of time.
  • the current environment detection map can be determined according to the object distribution information acquired by the mobile robot every 5 minutes of work.
  • Method B: determine according to the object distribution information acquired by the mobile robot within a preset distance.
  • the current environment detection map can be determined according to the object distribution information obtained by the mobile robot every 5 meters.
  • the object distribution information obtained by moving the mobile robot at any other distance may also be used to determine the current environment detection map.
  • when it is determined in step 2004 that the consistency between the current environment detection map and the environment layout map is less than the second threshold, it means that the difference between them may be relatively large, possibly because inaccurate positioning, a sensor failure, or the failure of another module has made the acquired object distribution information inaccurate, which in turn makes the current environment detection map insufficiently accurate; therefore, the environment detection map needs to be reacquired, that is, step 2005 is executed.
  • when it is determined in step 2004 that the degree of consistency between the current environment detection map and the environment layout map is greater than or equal to the second threshold, it means that the difference between them may be relatively small, and the mobile robot can be controlled to continue detection, that is, step 2002 is re-executed.
  • the foregoing second threshold may be a preset threshold.
  • the magnitude of the second threshold may be related to the accuracy finally required of the environment layout map: when the required accuracy is high, the second threshold can be set to a larger value, and when the required accuracy is low, the second threshold can be set to a smaller value.
  • the second threshold here can be specifically set to 0.6.
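Steps 2003–2005 can be condensed into a short check; the consistency measure is the same assumed grid-matching stand-in used in the earlier sketch (formulas (1) and (2) are not reproduced here), and only the 0.6 threshold is taken from the document.

```python
SECOND_THRESHOLD = 0.6  # example value given in the document

def check_current_detection(current_map, layout_map, consistency):
    """Compare the current environment detection map with the saved layout
    map. Returns True if detection can continue (back to step 2002), and
    False if an abnormality should be assumed and the map reacquired
    (step 2005)."""
    return consistency(current_map, layout_map) >= SECOND_THRESHOLD
```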
  • the occurrence of an abnormality when the mobile robot detects the surrounding environment may mean that the mobile robot cannot detect and obtain accurate object distribution information due to a certain failure in the process of detecting the surrounding environment.
  • the mobile robot may be unable to detect accurate object distribution information due to positioning failures, sensor failures, or failures of some processing modules inside the mobile robot.
  • Abnormal recovery method 1: perform a rollback operation to regain the object distribution information.
  • the mobile robot can be controlled to retreat from the current detection point to the first detection point; the mobile robot can be controlled to re-detect the surrounding environment from the first detection point to reacquire object distribution information.
  • the distance between the first detection point and the current detection point is a preset distance
  • the preset distance may be a distance set manually, and the specific value of the distance may be flexibly set based on experience.
  • the object distribution information can be re-acquired, which facilitates subsequent determination of the environment detection map based on the obtained object distribution information.
  • Abnormal recovery method 2: control the mobile robot to reset the operating system.
  • controlling the mobile robot to reset the operating system is equivalent to controlling the mobile robot to restart the system (similar to the restart of the computer).
  • Abnormal recovery method 3: control the mobile robot to restart the sensor of the mobile robot.
  • the above control of the mobile robot to restart the sensor of the mobile robot may specifically refer to the control of the mobile robot to close and reopen the port of the sensor.
  • the sensors of the above-mentioned mobile robot may include lidar, encoder, gyroscope, ultrasonic sensor, infrared sensor, and so on.
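The three recovery methods above can be sketched as a small dispatcher; the robot interface (`move_back`, `redetect_from_here`, `reset_operating_system`, `restart_sensor`) and the 0.5 m rollback distance are hypothetical placeholders, since the document only says the preset distance is set empirically.

```python
def recover(robot, method, rollback_distance=0.5):
    """Apply one of the abnormality recovery methods described above."""
    if method == "rollback":
        robot.move_back(rollback_distance)   # retreat to the first detection point
        robot.redetect_from_here()           # re-detect to reacquire distribution info
    elif method == "reset_os":
        robot.reset_operating_system()       # similar to restarting a computer
    elif method == "restart_sensor":
        robot.restart_sensor()               # close and reopen the sensor port
    else:
        raise ValueError(f"unknown recovery method: {method}")
```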
  • the process shown in FIG. 7 may further include step 2007, and inaccurate object distribution information can be eliminated by performing step 2007.
  • step 2002 can be continued to obtain current object distribution information.
  • the process shown in Figure 7 can occur when the mobile robot obtains object distribution information.
  • in this way, differences in the obtained object distribution information caused by abnormal positioning can be eliminated as much as possible, so that the obtained environment detection maps can be as accurate as possible.
  • a rollback operation can be performed and the detection operation can be performed again to ensure the accuracy of the environmental detection map as much as possible.
  • the coordinate values of the grid points in the above M environmental detection maps are all coordinate values in the same coordinate system.
  • the coordinate values of the grid points in the above M environmental detection maps are coordinate values located in a reference coordinate system, and the origin of the reference coordinate system is located at any one of the following positions:
  • for example, a cleaning robot usually has a charging stand nearby, and the position of the charging stand can be selected as the origin of the reference coordinate system.
  • the M environment detection maps can be directly established in the reference coordinate system during the process of obtaining them, so that the coordinate values of the grid points in the M environment detection maps are all coordinate values in the same coordinate system (the reference coordinate system).
  • alternatively, each environment detection map can first be established in a coordinate system whose origin is the starting point of the corresponding work task, and the M environment detection maps are then transformed into the same coordinate system (the reference coordinate system).
  • the foregoing acquiring of M environment detection maps includes: acquiring a first environment detection map of the M environment detection maps, where the coordinate values of the grid points in the first environment detection map are coordinate values in a first coordinate system; and converting the coordinate values of the grid points in the first environment detection map into coordinate values in the reference coordinate system.
  • the first environment detection map may be any one of the M environment detection maps; it may be determined according to the object distribution information detected while the mobile robot performs the i-th work task (1 ≤ i ≤ M, and i is an integer), and the origin of the first coordinate system may be determined according to the starting position of the mobile robot when it performs the i-th task.
  • the coordinate origin of the aforementioned first coordinate system may be the starting point when the mobile robot executes the i-th task (for example, it may be the center point of the starting position of the mobile robot executing the i-th task).
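The conversion from a task-specific coordinate system into the reference coordinate system can be sketched with a standard 2-D rigid transform; the actual conversion formula (formula (7), used later for the sweeping-robot example) is not reproduced in this excerpt, so the rotation-plus-translation form and the example values below are assumptions.

```python
import math

def to_reference_frame(points, origin_xy, theta):
    """Transform grid-point coordinates from a task-specific coordinate
    system (origin at the task starting point) into the reference
    coordinate system (for example, origin at the charging stand).

    origin_xy: position of the task starting point expressed in the
               reference frame; theta: assumed rotation between the frames.
    """
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    ox, oy = origin_xy
    return [(ox + cos_t * x - sin_t * y,
             oy + sin_t * x + cos_t * y) for x, y in points]

# usage: a hypothetical task start at (1.0, 2.0), rotated 90 degrees
ref_points = to_reference_frame([(0.5, 0.0)], (1.0, 2.0), math.pi / 2)
```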
  • the update of the work map of the mobile robot can be achieved through the interaction between the control device and the mobile robot, or the update of the work map can be achieved by the mobile robot alone.
  • Fig. 8 is a schematic diagram of a method for updating a working map of a mobile robot according to an embodiment of the present application.
  • the method shown in FIG. 8 can be executed jointly by the control device and the mobile robot.
  • the method shown in FIG. 8 includes steps 10010 to 10050, and these steps are respectively described in detail below.
  • the control device obtains M environmental detection maps from the mobile robot.
  • the mobile robot may send the M environment detection maps to the control device, and the control device receives the M environment detection maps.
  • in step 10010, the control device may acquire the M environment detection maps from the mobile robot at one time after the mobile robot has generated all M of them (the mobile robot sends the M environment detection maps together), or the control device may obtain one environment detection map from the mobile robot each time the mobile robot generates one (the mobile robot sends each environment detection map as it is generated).
  • the control device merges the M environment detection maps to obtain the environment fusion map obtained by the current fusion.
  • in step 10020, when the control device fuses the M environment detection maps (specifically, superimposes them) to obtain the environment fusion map, the M environment detection maps may be fused directly, or they may be fused after certain preprocessing (for example, filtering or noise reduction), or they may first be screened and the environment detection maps obtained after screening may then be fused.
  • the above step 10020 specifically includes: filtering the M environment detection maps to obtain M filtered environment detection maps, and fusing the M filtered environment detection maps to obtain the environment fusion map.
  • morphological filtering may be specifically used to extract line features of the environment detection map.
  • the control device obtains the environment layout map currently saved by the mobile robot from the mobile robot.
  • the mobile robot may send the currently saved environment layout map to the control device, and the control device receives the environment layout map currently saved by the mobile robot.
  • the control device performs weighting processing on the pixel values of the environment fusion map obtained by the current fusion and the pixel values of the currently saved environment layout map to obtain an updated environment layout map.
  • the specific process of obtaining the updated environment layout map in step 10040 may be the same as the specific process described in step 1003 and is not described in detail here.
  • the control device sends the updated environment layout map to the mobile robot.
  • in this way, the environment layout map currently saved by the mobile robot can be updated, so that the updated environment layout map reflects the environment layout in more detail, which makes it convenient for the mobile robot to better perform work tasks according to the updated environment layout map.
  • after receiving the updated environment layout map, the mobile robot can perform tasks according to it. Because the environment layout map is updated in the above process, the updated environment layout map is more accurate, so the mobile robot can work better according to it.
  • when the object distribution information obtained by the mobile robot is inaccurate due to a sensor failure (for example, a sensor positioning failure), the control device can control the mobile robot to perform abnormality recovery, so that the obtained object distribution information has high accuracy.
  • the abnormal recovery process in the method of the present application will be described in detail below in conjunction with FIG. 9.
  • FIG. 9 is a schematic diagram of the abnormal recovery process in the method for updating the work map of a mobile robot according to an embodiment of the present application.
  • the abnormal recovery process shown in FIG. 9 includes step 20010 to step 20050, and these steps are described in detail below.
  • the control device obtains current object distribution information from the mobile robot.
  • the current object distribution information is the object distribution information detected by the mobile robot within a preset interval or a preset distance.
  • the preset interval is a period of time before the mobile robot reaches the current detection point, and the preset distance is the distance moved by the mobile robot before reaching the current detection point.
  • step 20010 the mobile robot may send current object distribution information to the control device, and after the control device receives the current object distribution information, step 20020 may be executed.
  • the mobile robot can periodically send current object distribution information to the control device.
  • the control device determines the current environment detection map according to the current object distribution information.
  • control device determines that an abnormality occurs when the mobile robot detects the surrounding environment.
  • the control device sends an abnormal recovery instruction to the mobile robot.
  • the mobile robot performs abnormal recovery in response to the abnormal recovery instruction.
  • the above-mentioned abnormal recovery instruction is used to instruct the mobile robot to perform abnormal recovery. After receiving the abnormal recovery instruction, the mobile robot performs abnormal recovery in response to the abnormal recovery instruction.
  • the aforementioned abnormal recovery instruction may include a variety of specific operation instructions, for example, a rollback instruction, a re-detection instruction, a restart instruction, and so on.
  • control device sends a retreat instruction to the mobile robot.
  • the aforementioned retreat instruction is used to instruct the mobile robot to retreat from the current detection point to the first detection point, and the distance between the first detection point and the current detection point is a preset distance. After the mobile robot receives the retreat instruction, in response to the retreat instruction, it retreats from the current detection point to the first detection point.
  • control device sends a re-detection instruction to the mobile robot.
  • the above-mentioned re-detection instruction is used to instruct the mobile robot to re-detect the surrounding environment from the first detection point to re-acquire object distribution information. After receiving the re-detection instruction, the mobile robot re-detects the surrounding environment from the first detection point in response to the re-detection instruction to re-acquire object distribution information.
  • control device sends a restart instruction to the mobile robot.
  • the above restart instruction is used to instruct the mobile robot to restart and re-detect the surrounding environment after restarting. After the mobile robot receives the restart instruction, the mobile robot restarts in response to the restart instruction, and re-detects the surrounding environment after the restart.
  • the foregoing restart instruction can instruct the mobile robot to reset the operating system, or instruct the mobile robot to restart the corresponding sensor.
  • the restart of the operating system is similar to the restart of the computer system.
  • the restart of the sensor can specifically close and reopen the port of the sensor.
  • the sensors of the above-mentioned mobile robot may include lidar, encoder, gyroscope, ultrasonic sensor, infrared sensor, and so on.
  • control device sends a clear instruction to the mobile robot.
  • the above-mentioned clearing instruction is used to clear the current object distribution information. After the mobile robot receives the clearing instruction, it clears the current object distribution information in response to the clearing instruction.
  • the update of the map can also be implemented by the mobile robot itself. This situation will be described in detail below with reference to FIG. 10.
  • FIG. 10 is a schematic flowchart of a method for updating a work map of a mobile robot according to an embodiment of the present application.
  • the method shown in FIG. 10 can be executed by the mobile robot itself.
  • the method shown in FIG. 10 includes steps 30010 to 30050, and these steps are described in detail below.
  • the above-mentioned first detection period may correspond to the time of one work period, where one work period may mean that the mobile robot completes a work task.
  • the above M environment detection maps are determined according to the object distribution information detected by the mobile robot during the movement process, and M is an integer greater than 1.
  • the currently saved environment layout map is obtained by weighting the environment fusion map obtained by the last fusion and the environment layout map saved last time, and the environment fusion map obtained by the last fusion is obtained by fusing the M environment detection maps acquired last time.
  • in this way, the environment layout map currently saved by the mobile robot can be updated, so that the updated environment layout map reflects the environment layout in more detail, which makes it convenient for the mobile robot to better perform work tasks according to the updated environment layout map.
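An end-to-end sketch of the robot-side flow in Fig. 10 follows: collect M detection maps, fuse and filter them, then blend the result with the currently saved layout map (formula (6)). The callables passed in are placeholders corresponding to the earlier sketches (acquisition loop, fusion, morphological filtering, weighted update), and `robot.save_layout` is a hypothetical interface; none of these names are defined by the patent.

```python
def update_working_map(robot, saved_layout, m_required,
                       acquire_maps, fuse_maps, filter_map, blend_maps):
    """Robot-side update flow (sketch)."""
    detection_maps = acquire_maps(robot, m_required)   # M environment detection maps
    fused = fuse_maps(detection_maps)                  # environment fusion map (Mpre)
    m_obs = filter_map(fused)                          # filtered fusion map (Mobs)
    new_layout = blend_maps(saved_layout, m_obs)       # formula (6): updated layout map
    robot.save_layout(new_layout)                      # replace the saved layout map
    return new_layout
```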
  • the method shown in Figure 10 above is similar to the method shown in Figure 2 above; the main difference is that the method shown in Figure 2 can be executed by the control device of a mobile robot, while the method shown in Figure 10 can be executed by the mobile robot itself.
  • the extensions, explanations, and descriptions of map fusion and updating in the method shown in FIG. 2 are also applicable to the method shown in FIG. 10; to avoid unnecessary repetition, they are not repeated here.
  • Fig. 11 shows the process of updating the work map by the sweeping robot during the cleaning task.
  • Step 3001 represents the start of the cleaning task.
  • step 3001 when the cleaning robot starts to perform a cleaning task, the cleaning robot is in an initial state.
  • the number of times the cleaning robot performs cleaning tasks is 0, and the set C and the set D are both empty.
  • set C is used to store the environmental detection map obtained by the sweeping robot
  • set D is used to store the environmental detection map for updating the environmental layout map.
  • the coordinate system Wi-xy may be a coordinate system with the starting point of the i-th cleaning task as the origin of the coordinates
  • the map Mi is an environmental detection map determined according to the object distribution information obtained from the i-th cleaning task.
  • step 3003 When the charging dock is not detected, continue to perform step 3003; when the charging dock is detected, perform step 3005.
  • the coordinate system Bi-xy is a coordinate system with the position of the charging base as the coordinate origin, and this coordinate system may be called a reference coordinate system or a standard coordinate system.
  • in addition to being a coordinate system with the position of the charging stand as the coordinate origin, the coordinate system Bi-xy can also be a coordinate system whose reference origin is either the parking position of the sweeping robot after the task is completed or the location of the garbage transfer station matched with the sweeping robot.
  • the map Mi obtained in step 3003 is located in the coordinate system Wi-xy.
  • the coordinates of the map Mi are converted from the coordinate system Wi-xy to the coordinate system Bi-xy.
  • Step 3005 is equivalent to transforming the initially constructed map Mi into a unified coordinate system.
  • the coordinate value of the map Mi in the coordinate system Wi-xy can be converted to the coordinate value of the map Mi in the coordinate system Bi-xy according to formula (7).
  • in step 3011, the above formulas (1) and (2) can be used to determine the consistency between two maps, and the maps whose degree of consistency is greater than the preset threshold are saved in the set D; the specific process is not described in detail here.
  • the environment fusion map may be obtained according to the maps in the set D, and then the environment fusion map is used to update the environment layout map Mr to obtain the updated environment layout map Mr.
  • in step 3012, before the environment fusion map is obtained, the abnormal environment detection maps in the set D can first be eliminated to obtain the environment layout map update set Ds, and the environment detection maps in the set Ds are then used to determine the environment fusion map.
  • Fig. 12 is a schematic block diagram of an apparatus for updating a working map of a mobile robot according to an embodiment of the present application.
  • the device 5000 shown in FIG. 12 includes: an acquiring unit 5001 and a processing unit 5002.
  • the device 5000 shown in FIG. 12 is used to execute the method for updating the work map of a mobile robot in the embodiment of the present application.
  • the acquiring unit 5001 in the device 5000 can be used to acquire M environment detection maps, and the processing unit 5002 is configured to finally obtain an updated environment layout map according to the M environment detection maps acquired by the acquiring unit 5001.
  • the above-mentioned processing unit 5002 can either finally obtain an updated environment layout map based on the acquired M environment detection maps, or can realize the control of the mobile robot.
  • the processing unit 5002 here may not only have the function of data processing, but also the function of controlling the mobile robot.
  • Fig. 13 is a schematic block diagram of an apparatus for updating a working map of a mobile robot according to an embodiment of the present application.
  • the device 6000 shown in FIG. 13 includes: an acquisition unit 6001, a processing unit 6002, and a transceiver unit 6003.
  • the acquisition unit 6001 in the above-mentioned device 6000 can be used to acquire M environmental detection maps from a mobile robot.
  • the processing unit 6002 finally obtains an updated environment layout map based on the M environment detection maps acquired by the acquisition unit 6001, and the transceiver unit 6003 is used to send the updated environment layout map to the mobile robot.
  • the device 6000 shown in FIG. 13 may be equivalent to the control device in FIGS. 8 and 9, and the device 6000 may execute the steps executed by the control device in FIGS. 8 and 9.
  • Fig. 14 is a schematic block diagram of a mobile robot according to an embodiment of the present application.
  • the mobile robot 7000 shown in FIG. 14 includes a detection unit 7001 and a processing unit 7002.
  • the detection unit 7001 in the mobile robot 7000 is used to detect the surrounding environment during movement to obtain environment layout information, and the processing unit 7002 is used to process the environment layout information obtained by the detection unit 7001 to finally obtain an updated environment layout map .
  • the mobile robot 7000 shown in FIG. 14 can perform steps 30010 to 30050 in the method shown in FIG. 10.
  • the above-mentioned mobile robot 7000 may further include a transceiver unit.
  • Fig. 15 is a schematic block diagram of a mobile robot according to an embodiment of the present application.
  • the mobile robot 7000 shown in FIG. 15 includes a transceiver unit 7003 in addition to the detection unit 7001 and the processing unit 7002 shown in FIG. 14.
  • the detection unit 7001 is used to detect the surrounding environment to obtain environment layout information
  • the processing unit 7002 is used to determine an environment detection map according to the environment layout information detected by the detection unit 7001 in a detection period
  • the transceiver unit 7003 is used to send the M environment detection maps to the control device, and the transceiver unit 7003 may also receive the updated environment layout map from the control device.
  • the mobile robot 7000 shown in FIG. 15 may be equivalent to the mobile robot in FIGS. 8 and 9, and the mobile robot 7000 may perform the steps performed by the mobile robot in FIGS. 8 and 9.
  • the device 8000 shown in FIG. 16 includes a memory 8001, a processor 8002, a communication interface 8003, and a bus 8004. Among them, the memory 8001, the processor 8002, and the communication interface 8003 implement communication connections between each other through the bus 8004.
  • the aforementioned device 8000 may be a control device for controlling a mobile robot, or a mobile robot.
  • the processor 8002 in the device 8000 can obtain corresponding data and process it (for example, obtain M environment detection maps and finally obtain an updated environment layout map according to them), or control the mobile robot (for example, control the mobile robot to perform a retreat operation, or control the mobile robot to clear the current object distribution information).
  • the memory 8001 may be a read only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM).
  • the memory 8001 may store a program. When the program stored in the memory 8001 is executed by the processor 8002, the processor 8002 is configured to execute each step of the method for updating the work map of the mobile robot in the embodiment of the present application.
  • the processor 8002 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), a graphics processing unit (GPU), or one or more integrated circuits, and is used to execute related programs to implement the method for updating the work map of the mobile robot in the method embodiments of the present application.
  • the processor 8002 may also be an integrated circuit chip with signal processing capabilities.
  • each step of the method for updating the work map of a mobile robot of the present application can be completed by hardware integrated logic circuits in the processor 8002 or instructions in the form of software.
  • the aforementioned processor 8002 may also be a general-purpose processor, a digital signal processor (DSP), an ASIC, a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the methods, steps, and logical block diagrams disclosed in the embodiments of the present application can be implemented or executed.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor.
  • the software module may be located in a storage medium mature in the field, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
  • the storage medium is located in the memory 8001, and the processor 8002 reads the information in the memory 8001 and, in combination with its hardware, completes the functions required by the units included in the device or executes the method for updating the work map of the mobile robot in the method embodiments of the application.
  • the communication interface 8003 uses a transceiver device such as but not limited to a transceiver to implement communication between the device 8000 and other devices or communication networks. For example, the information of the neural network to be constructed and the training data needed in the process of constructing the neural network can be obtained through the communication interface 8003.
  • the bus 8004 may include a path for transferring information between various components of the device 8000 (for example, the memory 8001, the processor 8002, and the communication interface 8003).
  • the acquisition unit 5001 and the processing unit 5002 in the device 5000 are equivalent to the processor 8002 in the device 8000.
  • the acquisition unit 6001 and the processing unit 6002 in the above device 6000 are equivalent to the processor 8002 in the device 8000, and the transceiver unit 6003 is equivalent to the communication interface 8003 in the device 8000.
  • the detection unit 7001 and the processing unit 7002 in the device 7000 shown in FIG. 14 or FIG. 15 are equivalent to the processor 8002 in the device 8000, and the transceiver unit 7003 in the device 7000 shown in FIG. 15 is equivalent to the communication interface 8003 in the device 8000.
  • Fig. 17 is a schematic block diagram of a mobile robot according to an embodiment of the present application.
  • the mobile robot 9000 shown in FIG. 17 includes many functional modules, and these functional modules can be divided into different levels according to their mutual support relationship. Among them, the modules at the bottom layer support the functions of the modules at the upper layer.
  • Mobile robot 9000 specifically includes: motion robot platform 9011, sensor 9021, communication system 9022, operating system 9023, motion planning 9024, laser SLAM 9031, planning navigation 9032, environment memory 9033, task management 9041, abnormal and recovery 9042, intelligent service 9051 .
  • the robot motion platform 9011 includes hardware units such as a robot chassis, a motor drive unit, a power management unit, and a main control unit.
  • the robot motion platform 9011 can control the motion of the robot according to certain instructions. For example, in this application, when the mobile robot needs to retreat from the current detection point to the first detection point, the robot motion platform can be used to control the mobile robot to move from the current detection point to the first detection point.
  • the sensor 9021 may specifically include a lidar, an encoder, a gyroscope, an inertial measurement unit (IMU), an ultrasonic sensor, an infrared sensor, and so on. In this application, the sensor 9021 can detect surrounding objects to obtain object distribution information.
  • the communication system 9022 can use serial communication, Ethernet communication or CAN bus system for communication.
  • the mobile robot can communicate with the control device that controls the mobile robot through the communication system 9022.
  • the mobile robot can send the detected object distribution information to the control device through the communication system 9022, and the control device can send the updated environment layout map to the mobile robot through the communication system 9022.
  • the aforementioned communication system 9022 may be equivalent to the communication interface 8003 in the device 8000.
  • the operating system 9023 may be a Linux system.
  • Motion planning 9024 can plan the walking path of the robot.
  • Laser SLAM 9031 refers to an algorithm module that mainly uses a laser to realize simultaneous localization and mapping.
  • the map format generated by laser SLAM 9031 is a grid map.
  • the laser SLAM 9031 can generate an environment detection map based on the object distribution information obtained by detection.
  • the planning and navigation 9032 is responsible for completing the robot's autonomous motion and obstacle avoidance, and also includes functions such as full coverage required by other service tasks.
  • the environment memory module 9033 can be used to save the environment layout map obtained by the robot.
  • the environment memory module 9033 may save the environment layout map, and after obtaining the updated environment layout map, update the original environment layout map.
  • Task management 9041 is mainly to complete robot state management, user instruction interaction, and task management.
  • Abnormality and recovery 9042 is used to recover when the robot is abnormal.
  • for example, the abnormality and recovery module 9042 can control the mobile robot to retreat from the current detection point to the first detection point when the difference between the current environment detection map and the environment layout map is large, and control the mobile robot to restart detection from the first detection point.
  • Intelligent service 9051: based on the above-mentioned main modules, the intelligent service 9051 can independently and intelligently complete service work for the family or customers.
  • the smart service 9051 may include an interface for interacting with the user, through which the user can flexibly set tasks, adjust related parameters for executing tasks, and so on.
  • the disclosed system, device, and method may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • each unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • if the function is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • based on this understanding, the technical solution of this application essentially, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions used to make a computer device (which may be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the method described in each embodiment of the present application.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, an optical disc, or other media that can store program code.


Abstract

A method, apparatus (5000), and storage medium for updating a working map of a mobile robot (7000) in the field of intelligent control technology. The method includes: determining multiple environment detection maps according to object distribution information detected by the mobile robot (7000) during movement; fusing the multiple environment detection maps to obtain an environment fusion map; and then performing weighting processing on the environment fusion map and the environment layout map currently saved by the mobile robot (7000) to obtain an updated environment layout map. The environment layout map saved by the mobile robot (7000) is updated by means of the multiple detection maps acquired while the mobile robot (7000) is working, so that the updated environment layout map can reflect the environment layout in more detail, which facilitates the mobile robot (7000) in better performing subsequent work tasks.

Description

Method, apparatus, and storage medium for updating a working map of a mobile robot
This application claims priority to Chinese Patent Application No. 201910588829.0, filed with the Chinese Patent Office on July 2, 2019 and entitled "Method, apparatus, and storage medium for updating a working map of a mobile robot", which is incorporated herein by reference in its entirety.
技术领域
本申请涉及智能控制技术领域,并且更具体地,涉及一种更新移动机器人工作地图的方法、装置及存储介质。
背景技术
随着智能控制技术的发展,移动机器人的应用越来越普遍。移动机器人在进行工作时通常通过同步定位与地图构建(simultaneous localization and mapping,SLAM)的方式来得到工作环境下的环境布局地图,然后在具体工作时根据该环境布局地图进行路径规划,进而按照相应的路径执行相应的工作任务。
由于移动机器人(特别是家庭服务机器人)一般采用消费级的传感器和驱动系统,没有外部的辅助定位设备,定位精度和可靠性都不高。因此,为了降低移动机器人单次定位异常对路径规划的影响,移动机器人一般不保存环境布局地图,每次工作都会重新建图,然后根据新建的环境布局地图来进行路径规划(例如,每次开始执行任务时先对周围的环境进行大致的探测,根据探测得到的物体分布信息来构建环境布局地图,然后再根据环境布局地图进行路径规划)。
通过上述建图方式得到的环境布局地图一般不够全面,不能够反映工作场所内的物体的详细布局情况。
发明内容
本申请提供一种更新移动机器人工作地图的方法、装置及存储介质,以得到更能全面反映移动机器人工作环境布局情况的地图。
第一方面,提供了一种更新移动机器人工作地图的方法,该方法包括:获取M张环境探测地图;对M张环境探测地图进行融合,以得到环境融合地图;对环境融合地图的像素值和环境布局地图的像素值进行加权处理,得到更新后的环境布局地图。
其中,上述M张环境探测地图是根据移动机器人在运动过程中探测得到的物体分布信息确定的,M为大于1的整数。
在本申请中,环境探测地图可以是根据移动机器人探测得到的物体分布信息确定的,而环境布局地图一般是通过多个环境探测地图进行融合得到的。环境探测地图和环境布局地图都能够反映工作环境下的物体分布情况,与环境探测地图相比,环境布局地图一般更能够较为全面的反映工作环境下的物体的整体分布或者布局情况。
在对M张环境探测地图进行融合(具体可以是叠加)得到环境融合地图时,既可以直接对上述M张环境探测地图进行融合,也可以是对上述M张环境探测地图做了一定的预处理(例如,滤波处理,降噪处理)之后再进行融合,也可以是对上述M张环境探测地图做了一定筛选之后,对筛选后得到的环境探测地图进行融合。
应理解,上述环境布局地图可以是移动机器人根据之前获取到的多张环境探测地图融合得到的。
特别地,当移动机器人在一个新的工作场所执行任务时,可以先在该工作场所下进行探测,根据探测到的物体分布信息来确定多张环境探测地图,然后根据该多张环境探测地图来确定环境布局地图。在后续就可以按照本申请的方法对该环境布局地图进行不断的更新和优化了。
可选地,上述对M张环境探测地图进行融合,以得到环境融合地图,包括:对M张环境探测地图进行滤波处理,得到M张滤波处理后的环境探测地图;对该M张滤波处理后的环境探测地图进行融合,得到环境融合地图。
通过对环境探测地图进行滤波处理,能够去除掉环境中小的物体的干扰,保留环境的主要布局。
在对上述环境探测地图进行滤波处理时,具体可以采用形态滤波,以提取环境探测地图的线特征。
可选地,上述环境布局地图是所述移动机器人当前保存的第一工作场所的环境布局地图。
当完成了环境布局地图的更新,得到更新后的环境布局地图之后,移动机器人可以将原来的环境布局地图替换掉,保存更新后的环境布局地图。直到下次更新地图时再根据探测到的环境探测地图对环境布局地图进行再次更新。
本申请中,通过移动机器人工作时获取到的多张探测地图对移动机器人保存的环境布局地图进行更新,使得更新后的环境布局地图能够反映出更详细的环境布局情况。便于移动机器人后续根据该更新后的环境布局地图更好地执行工作任务。
在完成环境布局地图的更新之后,可以控制所述移动机器人根据所述更新后的环境布局地图在第一场所下执行工作任务。
由于更新后的环境布局地图能够更好地更全面的反映第一场所下的物体分布情况,因此,移动机器人根据更新后的环境布局地图能够更好地在第一场所下执行工作任务。
可选地,上述M是预先设置的数值。
上述M可以是预先由厂家设置的数值,也可以是由用户设置的数值。
除了预先设置之外,上述M的数值也可以通过环境探测地图数量配置信息来灵活设置,通过该环境探测地图数量配置信息能够直接确定要获取的环境探测地图的数量。
可选地,在获取M张环境探测地图之前,上述方法还包括:获取环境探测地图数量配置信息,该环境探测地图数量配置信息包括M的数值;根据环境探测地图数量配置信息确定所述M的数值。
通过环境探测地图数量配置信息能够灵活的实现对M的设置。
上述环境探测地图数量配置信息可以是用户输入的信息,例如,当上述方法由控制装置执行时,用户可以通过控制装置的控制界面输入上述环境探测地图数量配置信息,该控制装置既可以是集成在移动机器人中的模块,也可以是独立于移动机器人之外的单独的设备。
另外,上述M的数值可以通过环境探测地图数量修改信息进行调整,以增加或者减小M的数值。例如,可以通过向控制装置发送环境探测地图数量修改信息来实现对M的数值的调整。
结合第一方面,在第一方面的某些实现方式中,在进行加权处理时,环境融合地图的像素值对应的权重为第一权重,环境布局地图的像素值对应的权重为第二权重,第一权重和第二权重的大小是根据移动机器人的地图更新需求确定的。
移动机器人的地图更新需求可以是移动机器人(执行任务时)对环境布局地图的更新的快慢(或者频率)的要求,也可以是移动机器人(执行任务时)对环境布局地图的更新幅度的要求。
一般来说,当移动机器人要求环境布局地图快速更新时,可以将第一权重设置成一个较大的数值,而将第二权重设置成一个较小的数值。反之,当移动机器人并不要求环境布局地图快速更新时,可以将第一权重设置成一个较小的数值,而将第二权重设置成一个较大的数值。
另外,当移动机器人要求环境布局地图更新幅度较大时,可以将第一权重设置成一个较大的数值,而将第二权重设置成一个较小的数值。反之,移动机器人要求环境布局地图更新幅度较小时,可以将第一权重设置成一个较小的数值,而将第二权重设置成一个较大的数值。
本申请中,由于第一权重和第二权重是根据移动机器人的地图更新需求确定的,因此,可以根据移动机器人的地图更新需求来灵活更新环境布局地图。
结合第一方面,在第一方面的某些实现方式中,第一权重和第二权重的大小是根据移动机器人的地图更新需求确定的,包括:第一权重与移动机器人需要的环境布局地图更新频率为正相关关系,第二权重与移动机器人需要的环境布局地图更新频率为反相关关系。
也就是说,当移动机器人(执行任务时)要求的地图更新频率较高时,可以将第一权重设置成一个较大的数值,而将第二权重设置成一个较小的数值(例如,第一权重值设置成0.7,第二权重值设置为0.3)。反之,当移动机器人(执行任务时)要求的地图更新频率较低时,可以将第一权重设置成一个较小的数值,而将第二权重设置成一个较大的数值(例如,第一权重值设置成0.3,第二权重值设置为0.7)。
结合第一方面,在第一方面的某些实现方式中,第一权重和第二权重的大小是根据移动机器人的地图更新需求确定的,包括:第一权重与移动机器人需要的环境布局地图更新幅度为正相关关系,第二权重与移动机器人需要的环境布局地图更新幅度为反相关关系。
结合第一方面,在第一方面的某些实现方式中,对M张环境探测地图进行融合,以获取当前融合得到的环境融合地图,包括:从M张环境探测地图中确定出N张环境探测地图;对N张环境探测地图进行融合,获取当前融合得到的环境融合地图。
其中,上述N张环境探测地图中任意两张环境探测地图之间的一致度大于或者等于第一阈值,N为小于或者等于M的正整数。
通过从M张环境探测地图中选择出一致度较好的环境探测地图进行融合,能够得到更加准确的环境融合地图。
结合第一方面,在第一方面的某些实现方式中,上述方法还包括:获取当前物体分布信息;根据该当前物体分布信息确定当前环境探测地图;在当前环境探测地图与环境布局地图的一致度小于第二阈值的情况下,确定移动机器人在对周围环境进行探测时发生异常;控制移动机器人进行异常恢复。
其中,上述当前物体分布信息是移动机器人在预设间隔内或者预设距离内探测到的物体分布信息,该预设间隔是移动机器人到达当前探测点之前的一段时间间隔,该预设距离是移动机器人到达当前探测点之前移动的一段距离。
上述预设间隔可以是人为设置的一段时间间隔,上述预设距离可以是人为设置的一段距离。上述预设间隔和预设距离可以根据具体的需求来灵活设置。
上述移动机器人对周围环境进行探测时发生异常,可以是指移动机器人在对周围环境探测过程中由于某种故障导致无法探测得到准确的物体分布信息。
具体地,移动机器人可能会是由于定位故障、传感器故障或者移动机器人内部某些处理模块的故障,导致移动机器人无法探测得到准确的物体分布信息。
结合第一方面,在第一方面的某些实现方式中,控制移动机器人进行异常恢复,包括:控制移动机器人从当前探测点回退到第一探测点,该第一探测点与当前探测点之间的距离为预设距离;控制移动机器人从第一探测点开始重新对周围环境进行探测,以重新获取物体分布信息。
本申请中,在当前环境探测地图与环境布局地图的差异较大时,通过执行回退操作并重新开始对周围环境探测,重新获取物体分布信息,能够尽可能的保证移动机器人获取到的物体分布信息的准确性。
可选地,上述控制所述移动机器人进行异常恢复,包括:控制移动机器人对操作系统进行复位。
上述控制移动机器人对操作系统进行复位相当于控制移动机器人进行系统的重启(类似于电脑的重启)。
可选地,上述控制所述移动机器人进行异常恢复,包括:控制移动机器人对移动机器人的传感器进行重启。
上述控制移动机器人对移动机器人的传感器进行重启,具体可以是指控制移动机器人将传感器的端口关闭后再重新打开。
上述移动机器人的传感器可以包括激光雷达,编码器,陀螺仪,以及超声传感器,红外传感器等等。
结合第一方面,在第一方面的某些实现方式中,上述方法还包括:控制移动机器人清除当前物体分布信息。
通过控制移动机器人清除当前物体分布信息,能够将可能由于定位异常或者传感器故障而获取到的不太准确的物体分布信息清除掉,能够尽可能的保证移动机器人获取到的物体分布信息的准确性。
应理解,上述控制移动机器人清除当前物体分布信息的操作既可以视为异常恢复之外的额外操作,也可以视为异常恢复操作的一部分。
结合第一方面,在第一方面的某些实现方式中,上述M张环境探测地图均位于同一坐标系中。
当上述M个环境探测地图是位于同一坐标系中的地图时,便于对M个环境探测地图进行更精准的融合,从而得到更加准确的环境融合地图。
结合第一方面,在第一方面的某些实现方式中,上述M张环境探测地图均是位于参考坐标系中的地图,该参考坐标系的原点可以位于以下任意一个位置中:移动机器人的充电座所在的位置;移动机器人结束任务后的停靠位置;与移动机器人匹配的垃圾中转站所在的位置。
应理解,以上各个位置(充电座所在的位置、移动机器人的停靠位置以及垃圾中转站所在的位置)的中心点或者其他位置的点(例如,某个特定位置的点)均可以作为参考坐标系的原点。
可选地,上述获取M张环境探测地图,包括:获取所述M张环境探测地图中的第一环境探测地图,所述第一环境探测地图中的栅格点的坐标值是位于第一坐标系下的坐标值;将所述第一环境探测地图中的栅格点的坐标值转换到参考坐标系下的坐标值。
上述第一环境探测地图可以是上述M张环境探测地图中的任意一张环境探测地图,该第一环境探测地图可以是根据移动机器人执行第i(1≤i≤M,且i为整数)次工作任务时探测到的物体分布信息确定的,上述第一坐标系的原点可以是根据移动机器人执行该第i次任务时的起始位置确定的。
具体地,上述第一坐标系的坐标原点可以是移动机器人执行该第i次任务时的起始点(例如,可以是移动机器人执行第i次任务的起始位置的中心点)。
结合第一方面,在第一方面的某些实现方式中,上述M张环境探测地图是根据移动机器人在第一工作场所下执行工作任务时探测得到的物体分布信息确定的。
本申请中,通过移动机器人执行工作任务时探测得到的物体分布信息来确定M张环境探测信息,能够在移动机器人执行其他任务的同时获取到物体分布信息,可以提高工作效率。
结合第一方面,在第一方面的某些实现方式中,上述M张环境探测地图分别是根据M个物体分布信息确定的,所述M个物体分布信息分别是移动机器人在第一工作场所下执行M次工作任务时对周围环境进行探测得到的物体分布信息。
上述M张环境探测地图中的任意一个环境探测地图都是根据移动机器人执行对应的一次工作任务时探测得到的物体分布信息确定的。
本申请中,由于M张环境探测地图中的每张环境探测地图都是根据移动机器人在第一工作场所下执行一次工作任务时获取到的物体分布信息确定的,使得每张环境探测地图都能够尽可能全面的反映第一场所下的物体分布情况,进而使得最终得到的更新后的环境布局地图能够较为全面的反映第一场所的物体分布情况。
可选地,上述获取M张环境探测地图,包括:根据所述移动机器人在第一工作场所下执行工作任务时对周围环境进行探测得到的物体分布信息得到环境探测地图,直到得到M张环境探测地图。
在本申请中,可以对获取到的环境探测地图的数量进行实时统计,当环境探测地图的数量达到M时,就可以根据该M张环境探测地图确定环境融合地图。
应理解,当根据上述M张环境探测地图确定环境融合地图时,还可以继续进行环境探测地图的获取,当再次获取到M张环境探测地图时,可以重复执行第一方面中的方法中的过程。
可选地,上述根据所述移动机器人在第一工作场所下执行工作任务时对周围环境进行探测得到的物体分布信息得到环境探测地图,直到得到M张环境探测地图,包括:根据移动机器人在第一工作场所下执行一次工作任务时对周围环境进行探测得到的一个物体分布信息得到一张环境探测地图,直到得到M张环境探测地图。
第二方面,提供了一种更新移动机器人工作地图的方法,该方法包括:从移动机器人获取M张环境探测地图;对M张环境探测地图进行融合,以获取当前融合得到的环境融合地图;从移动机器人获取移动机器人当前保存的环境布局地图;对当前融合得到的环境融合地图的像素值和当前保存的环境布局地图的像素值进行加权处理,得到更新后的环境布局地图;将更新后的环境布局地图发送给移动机器人。
上述M张环境探测地图是根据移动机器人在运动过程中探测得到的物体分布信息确定的,M为大于1的整数。
另外,当前保存的环境布局地图是对上次融合得到的环境融合地图和上次保存的环境布局地图进行加权处理得到的,而上次融合得到的环境融合地图是根据上次获取到的M张环境探测地图进行融合得到的。
上述第二方面的方法可以由用于控制移动机器人工作的控制装置来执行。
本申请中,通过从移动机器人获取移动机器人工作时得到的多张探测地图,能够对移动机器人当前保存的环境布局地图进行更新,使得更新后的环境布局地图能够反映出更详细的环境布局情况。便于移动机器人后续根据该更新后的环境布局地图更好地执行工作任务。
可选地,上述从移动机器人获取M张环境探测地图,包括:从所述移动机器人接收所述M张环境探测地图。
应理解,在上述获取M张环境探测地图的过程中,既可以是在移动机器人生成了M张环境探测地图之后,一次性的从移动机器人获取M张环境探测地图(移动机器人发送M张环境探测地图),也可以是在移动机器人每生成一张环境探测地图之后,就从该移动机器人获取一张环境探测地图(移动机器人每生成一张环境探测地图之后就发送一张环境探测地图)。
结合第二方面,在第二方面的某些实现方式中,上述方法还包括:在对M张环境探测地图进行融合之前,上述方法还包括:确定是否获取到M张环境探测地图。
在对M张环境探测地图进行融合之前,要对环境探测地图的数量进行统计,当获取到的环境探测地图的数量达到M张之后,再对该M张环境探测地图进行融合。
结合第二方面,在第二方面的某些实现方式中,上述方法还包括:从移动机器人获取当前物体分布信息;根据当前物体分布信息确定当前环境探测地图;在当前环境探测地图与当前保存的环境布局地图的一致度小于第二阈值的情况下,确定移动机器人在对周围环境进行探测时发生异常;向移动机器人发送异常恢复指令。
其中,当前物体分布信息是移动机器人在预设间隔内或者预设距离内探测到的物体分布信息,该预设间隔是移动机器人到达当前探测点之前的一段时间间隔,该预设距离是移动机器人到达当前探测点之前移动的一段距离。
上述异常恢复指令用于指示移动机器人进行异常恢复,在接收到该异常恢复指令之后,移动机器人响应于该异常恢复指令进行异常恢复。
结合第二方面,在第二方面的某些实现方式中,上述进行异常恢复,包括:向移动机器人发送回退指令;向移动机器人发送重新探测指令。
上述回退指令用于指示移动机器人从当前探测点回退到第一探测点,该第一探测点与当前探测点之间的距离为预设距离。当移动机器人接收到回退指令之后,响应于该回退指令,从当前探测点回退到第一探测点。
上述重新探测指令用于指示移动机器人从第一探测点开始重新对周围环境进行探测,以重新获取物体分布信息。当移动机器人接收到重新探测指令之后,响应于该重新探测指令从第一探测点开始重新对周围环境进行探测,以重新获取物体分布信息。
结合第二方面,在第二方面的某些实现方式中,上述进行异常恢复,包括:向移动机器人发送重启指令。
上述重启指令用于指示移动机器人进行重启,并在重启后重新对周围环境进行探测。当移动机器人接收到该重启指令之后,移动机器人响应于该重启指令,进行重启,并在重启之后重新对周围环境进行探测。
上述通过重启指令既可以指示移动机器人对操作系统进行复位,也可以指示移动机器人对相应的传感器进行重启。
其中,对操作系统的重启类似于对电脑系统的重启,传感器的重启具体可以将传感器的端口关闭后再重新打开。
上述移动机器人的传感器可以包括激光雷达,编码器,陀螺仪,以及超声传感器,红外传感器等等。
结合第二方面,在第二方面的某些实现方式中,上述方法还包括:向移动机器人发送清除指令。
上述清除指令用于清除当前物体分布信息,当移动机器人接收到清除指令之后,响应于该清除指令,将当前物体分布信息清除。
通过控制移动机器人清除当前物体分布信息,能够将可能由于定位异常或者传感器故障而获取到的不太准确的物体分布信息清除掉,能够尽可能的保证移动机器人获取到的物体分布信息的准确性。
结合第二方面,在第二方面的某些实现方式中,上述M张环境探测地图均位于同一坐标系中。
由于上述M个环境探测地图是位于同一坐标系中,因此,能够对上述M个环境探测地图进行更精准的融合,从而得到更加准确的环境融合地图。
结合第二方面,在第二方面的某些实现方式中,上述M张环境探测地图均是位于参考坐标系中的地图,该参考坐标系的原点可以位于以下任意一个位置中:移动机器人的充电座所在的位置;移动机器人结束任务后的停靠位置;与移动机器人匹配的垃圾中转站所在的位置。
具体地,以上各个位置中的任意一个位置的中心点或者其他特定位置的点都可以作为上述参考坐标系的原点。
第三方面,提供了一种更新移动机器人工作地图的方法,该方法包括:在运动过程中对周围环境进行探测,得到环境布局信息;根据一次探测周期探测得到的环境布局信息确定一张环境探测地图;向控制装置发送M张环境探测地图;从控制装置接收更新后的环境布局地图。
其中,更新后的环境布局地图是控制装置对当前融合得到的环境融合地图的像素值和当前保存的环境布局地图的像素值进行加权处理得到的,所述当前融合得到的环境融合地图是对所述M张环境探测地图进行融合得到的。
另外,上述M张环境探测地图是根据移动机器人在运动过程中探测得到的物体分布信息确定的,M为大于1的整数。
当前保存的环境布局地图是对上次融合得到的环境融合地图和上次保存的环境布局地图进行加权处理得到的,而上次融合得到的环境融合地图是根据上次获取到的M张环境探测地图进行融合得到的。
上述第三方面的方法可以由移动机器人来执行。
本申请中,移动机器人通过在工作时获取M张探测地图,并将M张环境探测地图发送给控制装置,使得控制装置能够根据该M张环境探测地图对移动机器人当前保存的环境布局地图进行更新,使得更新后的环境布局地图能够反映出更详细的环境布局情况。便于移动机器人后续根据该更新后的环境布局地图更好地执行工作任务。
应理解,移动机器人向控制装置发送M张环境探测地图,既可以是在移动机器人全部生成该M张环境探测地图之后,将该M张环境探测地图发送给控制装置,也可以是移动机器人在每生成一张环境探测地图之后,就向控制装置发送一张环境探测地图,直到向控制装置发送了M张环境探测地图。
结合第三方面,在第三方面的某些实现方式中,上述方法还包括:向控制装置发送当前物体分布信息,以便于控制装置根据该当前物体分布信息确定当前环境探测地图;接收控制装置发送的异常恢复指令,响应于该异常恢复指令进行异常恢复。
其中,当前物体分布信息是移动机器人在预设间隔内或者预设距离内探测到的物体分布信息,该预设间隔是移动机器人到达当前探测点之前的一段时间间隔,该预设距离是移动机器人到达当前探测点之前移动的一段距离。
上述异常恢复指令是控制装置在当前环境探测地图与当前保存的环境布局地图的一致度小于第二阈值的情况下生成的,在当前环境探测地图与当前保存的环境布局地图的一致度小于第二阈值的情况下控制装置确定移动机器人在对周围环境进行探测时发生异常,向移动机器人发送异常恢复指令。
上述异常恢复指令用于指示移动机器人进行异常恢复,在接收到该异常恢复指令之后,移动机器人响应于该异常恢复指令进行异常恢复。
结合第三方面,在第三方面的某些实现方式中,接收控制装置发送的异常恢复指令,响应于该异常恢复指令进行异常恢复,包括:接收控制装置发送的回退指令和重新探测指令;响应于该回退指令,从当前探测点回退到第一探测点;响应于该重新探测指令从第一探测点开始重新对周围环境进行探测,以重新获取物体分布信息。
上述回退指令和重新探测指令可以是异常恢复指令中的两个具体的指令。
上述异常恢复指令还可以包括重启指令,该重启指令用于指示移动机器人进行重启,并在重启后重新对周围环境进行探测。当移动机器人接收到该重启指令之后,移动机器人响应于该重启指令,进行重启,并在重启之后重新对周围环境进行探测。
上述通过重启指令既可以指示移动机器人对操作系统进行复位,也可以指示移动机器人对相应的传感器进行重启。
其中,对操作系统的重启类似于对电脑系统的重启,传感器的重启具体可以将传感器的端口关闭后再重新打开。
上述移动机器人的传感器可以包括激光雷达,编码器,陀螺仪,以及超声传感器,红外传感器等等。
结合第三方面,在第三方面的某些实现方式中,上述方法还包括:接收控制装置发送的清除指令,响应于该清除指令,将当前物体分布信息清除。
通过将当前物体分布信息清除,能够将可能由于定位异常或者传感器故障而获取到的不太准确的物体分布信息清除掉,能够尽可能的保证移动机器人获取到的物体分布信息的准确性。
由于上述M个环境探测地图是位于同一坐标系中,因此,能够对上述M个环境探测地图进行更精准的融合,从而得到更加准确的环境融合地图。
结合第三方面,在第三方面的某些实现方式中,上述M张环境探测地图均是位于参考坐标系中的地图,该参考坐标系的原点可以位于以下任意一个位置中:移动机器人的充电座所在的位置;移动机器人结束任务后的停靠位置;与移动机器人匹配的垃圾中转站所在的位置。
具体地,以上各个位置中的任意一个位置的中心点或者其他特定位置的点都可以作为上述参考坐标系的原点。
第四方面,提供了一种更新移动机器人工作地图的方法,该方法包括:在运动过程中对周围环境进行探测,得到环境布局信息;根据一次探测周期探测得到的环境布局信息确定一张环境探测地图;在获取到M张环境探测地图的情况下,对M张环境探测地图进行融合,以获得当前融合得到的环境融合地图;对当前融合得到的环境融合地图的像素值和当前保存的环境布局地图的像素值进行加权处理,得到更新后的环境布局地图。
其中,当前保存的环境布局地图是对上次融合得到的环境融合地图和上次保存的环境布局地图进行加权处理得到的,而上次融合得到的环境融合地图是根据上次获取到的M张环境探测地图进行融合得到的。
上述第四方面的方法可以由移动机器人来执行。
本申请中,通过获取移动机器人工作时得到的多张探测地图,能够对移动机器人当前保存的环境布局地图进行更新,使得更新后的环境布局地图能够反映出更详细的环境布局情况。便于移动机器人后续根据该更新后的环境布局地图更好地执行工作任务。
结合第四方面,在第四方面的某些实现方式中,上述方法还包括:在对M张环境探测地图进行融合之前,上述方法还包括:确定是否获取到M张环境探测地图。
具体地,可以在每生成一张环境探测地图之后,可以对环境探测地图的数量进行统计,确定已经生成的环境探测地图的数量是否达到了M。
结合第四方面,在第四方面的某些实现方式中,上述方法还包括:根据当前物体分布信息确定当前环境探测地图;在当前环境探测地图与当前保存的环境布局地图的一致度小于第二阈值的情况下,确定移动机器人在对周围环境进行探测时发生异常;进行异常恢复。
其中,当前物体分布信息是移动机器人在预设间隔内或者预设距离内探测到的物体分布信息,预设间隔是移动机器人到达当前探测点之前的一段时间间隔,预设距离是移动机器人到达当前探测点之前移动的一段距离。
结合第四方面,在第四方面的某些实现方式中,进行异常恢复,包括:从当前探测点回退到第一探测点,第一探测点与当前探测点之间的距离为预设距离;从第一探测点开始重新对周围环境进行探测,以重新获取物体分布信息。
具体地,当第四方面的方法由移动机器人执行时,移动机器人可以通过控制移动机器人运动平台来实现从当前探测点回退到第一探测点。移动机器人可以通过控制传感器重新对周围环境进行探测,以重新获取物体分布信息。
结合第四方面,在第四方面的某些实现方式中,上述方法还包括:清除当前物体分布信息。
具体地,当第四方面的方法由移动机器人执行时,移动机器人可以对存储模块存储的当前物体分布信息进行擦除。
结合第四方面,在第四方面的某些实现方式中,上述进行异常恢复,包括:执行重启操作。
在重启之后,可以重新对周围环境进行探测。
上述重启操作既可以是对操作系统进行复位,也可以是对相应的传感器进行重启。
其中,对操作系统的重启类似于对电脑系统的重启,传感器的重启具体可以将传感器的端口关闭后再重新打开。
上述移动机器人的传感器可以包括激光雷达,编码器,陀螺仪,以及超声传感器,红外传感器等等。
应理解,上述第一方面的方法中对相应过程(例如,环境融合地图的生成过程,地图的更新过程等等)和相应信息(例如,环境探测地图)的扩展和说明同样适用于第二方面、第三方面以及第四方面中的任意一个方面的方法。
第五方面,提供了一种更新移动机器人工作地图的装置,该装置包括用于执行上述第一方面或者第二方面中的方法中的各个模块。
第六方面,提供了一种移动机器人,该移动机器人包括用于执行上述第三方面或者第四方面中的方法中的各个模块。
第七方面,提供了一种更新移动机器人工作地图的装置,该装置包括:存储器,用于存储程序;处理器,用于执行所述存储器存储的程序,当所述存储器存储的程序被执行时,所述处理器用于执行上述第一方面中的方法。
第八方面,提供了一种更新移动机器人工作地图的装置,该装置包括:收发器;存储器,用于存储程序;处理器,用于执行所述存储器存储的程序,当所述存储器存储的程序被执行时,所述收发器和所述处理器用于执行上述第二方面中的方法。
第九方面,提供了一种移动机器人,该移动机器人包括:收发器;存储器,用于存储程序;处理器,用于执行所述存储器存储的程序,当所述存储器存储的程序被执行时,所述收发器和所述处理器用于执行上述第三方面中的方法。
第十方面,提供了一种移动机器人,该移动机器人包括:存储器,用于存储程序;处理器,用于执行所述存储器存储的程序,当所述存储器存储的程序被执行时,所述处理器用于执行上述第四方面中的方法。
第十一方面,提供一种计算机存储介质,该计算机存储介质存储有程序代码,该程序代码包括用于执行第一方面、第二方面、第三方面以及第四方面中的任意一个方面的方法中的步骤的指令。
上述存储介质具体可以是非易失性存储介质。
第十二方面,提供一种包含指令的计算机程序产品,当该计算机程序产品在计算机上运行时,使得计算机执行上述第一方面、第二方面、第三方面以及第四方面中的任意一个方面中的方法。
第十三方面,提供一种芯片,所述芯片包括处理器与数据接口,所述处理器通过所述数据接口读取存储器上存储的指令,执行上述第一方面、第二方面、第三方面以及第四方面中的任意一个方面中的方法。
可选地,作为一种实现方式,所述芯片还可以包括存储器,所述存储器中存储有指令,所述处理器用于执行所述存储器上存储的指令,当所述指令被执行时,所述处理器用于执行第一方面、第二方面、第三方面以及第四方面中的任意一个方面中的方法。
上述芯片具体可以是现场可编程门阵列FPGA或者专用集成电路ASIC。
应理解,上述任意一个方面中的方法具体可以是指该任意一个方面的方法以及该任意一个方面中的任意一种实现方式中的方法。
附图说明
图1是扫地机器人的示意图;
图2是本申请实施例的更新移动机器人工作地图的方法的示意性流程图;
图3是环境探测地图的示意图;
图4是滤波处理后的环境探测地图的示意图;
图5是环境布局地图的示意图;
图6是获取M张环境探测地图的过程的示意图;
图7是本申请实施例中获取物体分布信息的过程的示意图;
图8是本申请实施例的更新移动机器人工作地图的方法的示意图;
图9是本申请实施例的更新移动机器人工作地图的方法的异常恢复过程的示意图;
图10是本申请实施例的更新移动机器人工作地图的方法的示意性流程图;
图11是扫地机器人在清扫任务的过程中对工作地图进行更新的过程的示意图;
图12是本申请实施例的更新移动机器人工作地图的装置的示意性框图;
图13是本申请实施例的更新移动机器人工作地图的装置的示意性框图;
图14是本申请实施例的移动机器人的示意性框图;
图15是本申请实施例的移动机器人的示意性框图;
图16是本申请实施例的更新移动机器人工作地图的装置的示意性框图;
图17是本申请实施例的移动机器人示意性框图。
具体实施方式
下面将结合附图,对本申请中的技术方案进行描述。
本申请中的移动机器人可以是能够在室内环境(例如,家庭,购物中心以及工厂车间等环境)中进行移动并且执行一定任务的机器人。具体地,该移动机器人可以包括扫地机器人、搬运机器人等等。
例如,图1示出了一种常见的扫地机器人,该扫地机器人可以通过充电座进行充电。
移动机器人在执行任务时一般先获取相应的工作场所下的物体分布情况,也就是先获取该工作场所下的环境布局地图,然后再根据环境布局地图来执行具体的任务。但是由于同一工作场所下的物体分布情况可能会发生变化,因此,为了使得移动机器人更好地执行工作任务,需要对移动机器人的工作地图进行更新。下面结合附图2对本申请实施例的更新移动机器人工作地图的方法进行详细的介绍。
图2是本申请实施例的更新移动机器人工作地图的方法的示意性流程图。图2所示的方法可以由移动机器人的控制装置来执行,该控制装置用于对移动机器人的工作地图进行更新。该控制装置既可以是位于移动机器人内部的控制模块,也可以是位于移动机器人之外的独立设备。
当上述控制装置是位于移动机器人之外的独立设备时,该控制装置可以是一种电子设备,该电子设备具体可以是移动终端(例如,智能手机),平板电脑,笔记本电脑,增强现实/虚拟现实设备以及可穿戴设备等等。
图2所示的方法包括步骤1001至1003,下面分别对这些步骤进行详细的介绍。
1001、获取M张环境探测地图。
上述M张环境探测地图可以是移动机器人在运动过程中探测得到的物体分布信息确定的。
在上述第一工作场所下,移动机器人对周围物体进行探测,得到物体分布信息。
可选地,上述获取M张环境探测地图,包括:控制移动机器人对周围的物体进行探测,得到第一场所下的物体分布信息;根据第一场所下的物体分布信息确定M张环境探测地图。
上述步骤1001中获取到的环境探测地图可以如图3所示。如图3所示,该环境探测地图显示了家庭环境中的物体的轮廓线或者边界线,该环境探测地图是根据移动机器人在家庭环境中对周围物体进行探测时得到的物体分布信息确定的。
可选地,上述M的数值是预先设置的。
上述M的具体数值既可以由厂家在出厂前进行设置,也可以是在由用户在移动机器人探测之前进行设置。
除了预先设置之外,上述M的数值也可以通过环境探测地图数量配置信息来灵活设置。
可选地,在上述步骤1001之前,图2所示的方法还包括:获取环境探测地图数量配置信息,该环境探测地图数量配置信息包括M的数值;根据环境探测地图数量配置信息确定M的数值。
本申请中,通过环境探测地图数量配置信息能够灵活的实现对M的设置,便于根据需要获取相应数量的环境探测地图。
上述环境探测地图数量配置信息可以是用户输入的信息,例如,当上述方法由上述控制装置执行时,用户可以通过控制装置的控制界面输入上述环境探测地图数量配置信息。该环境探测地图数量配置信息包括M的具体数值。
另外,上述M的数值还可以通过环境探测地图数量修改信息进行调整。具体地,用户可以通过向控制装置输入环境探测地图数量修改信息,控制装置在接收到该环境探测地图数量修改信息之后,能够根据该环境探测地图数量修改信息对M的数值进行调整。
可选地,上述M张环境探测地图是根据移动机器人在第一工作场所下执行工作任务时探测得到的物体分布信息确定的。
本申请中,通过移动机器人执行工作任务时探测得到的物体分布信息来确定M张环境探测信息,能够在移动机器人执行其他任务的同时获取到物体分布信息,可以提高工作效率。
可选地,上述M张环境探测地图分别是根据M个物体分布信息确定的,该M个物体分布信息分别是移动机器人在第一工作场所下执行M次工作任务时对周围环境进行探测得到的物体分布信息。
也就是说,对于上述M张环境探测地图来说,其中的任意一个环境探测地图都是根据移动机器人执行对应的一次工作任务时探测得到的物体分布信息确定的。
例如,M张环境探测地图中的第i张环境探测地图可以是移动机器人在执行第j次工作任务时对周围环境进行探测得到的物体分布信息确定的,其中,i和j均为正整数,并且1≤i≤M,1≤j≤M,i和j既可以相同,也可以不同。
本申请中,由于M张环境探测地图中的每张环境探测地图都是根据移动机器人在第一工作场所下执行一次工作任务时获取到的物体分布信息确定的,使得每张环境探测地图都能够尽可能全面的反映第一场所下的物体分布情况,进而使得最终得到的更新后的环境布局地图能够较为全面的反映第一场所的物体分布情况。
可选地,上述获取M张环境探测地图,包括:根据所述移动机器人在第一工作场所下执行工作任务时对周围环境进行探测得到的物体分布信息得到环境探测地图,直到得到M张环境探测地图。
也就是说,在本申请中,可以逐个获取环境探测地图,直到环境探测地图的数量达到M。
在本申请中,可以对获取到的环境探测地图的数量进行实时统计,当环境探测地图的数量达到M时,就可以根据该M张环境探测地图确定环境融合地图。
应理解,当根据上述M张环境探测地图确定环境融合地图时,还可以继续进行环境探测地图的获取,当再次获取到M张环境探测地图时,再次根据获取到的M张环境探测地图确定环境融合地图。
在本申请中,移动机器人可以通过自身的传感器或者探测器对周围的环境进行探测,以得到周围的物体分布信息。
上述传感器或者探测器具体可以包括摄像头(具体可以是深度摄像头)、红外传感器、测距雷达以及超声传感器中的至少一种。
在图2所示的方法中,还可以在更新地图的过程中,创建一个环境探测地图集合C(以下简称集合C),该集合C用于保存获取到的环境探测地图,当该集合C中的环境探测地图的数量达到M时,可以将该集合C中的M张环境探测地图都写入到环境布局地图更新集合D(以下简称集合D)中,并将该集合C中保存的环境探测地图清空(可以将接下来再获得的环境探测地图保存在集合C中),这样,集合D中的M张环境探测地图用于确定环境融合地图。
进一步的,还可以淘汰集合D中异常的环境探测地图,得到环境布局地图更新集合Ds,接下来,再利用该集合Ds中的环境探测地图来确定环境融合地图。
1002、对M张环境探测地图进行融合,以获取当前融合得到的环境融合地图。
在上述步骤1002中对M张环境探测地图进行融合(具体可以是叠加)得到环境融合地图时,既可以直接对上述M张环境探测地图进行融合,也可以是对上述M张环境探测地图做了一定的预处理(例如,滤波处理,降噪处理)之后再进行融合,也可以是对上述M张环境探测地图做了一定筛选之后,对筛选后得到的环境探测地图进行融合。
步骤1002中确定环境融合地图时可以有两种不同的方式,下面对这两种方式分别进行详细的介绍。
第一种方式:直接将M张环境探测地图进行融合,以获取当前融合得到的环境融合地图。
在第一种方式下,在对M张环境探测地图进行融合时,具体可以是将M张环境探测地图的像素值进行平均处理,将得到的平均像素值作为环境融合地图的像素值。
在第一种方式下,在获取到M张环境探测地图之后,通过直接对该M张环境探测地图进行融合,能够较为方便快速的确定环境融合地图。
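为便于理解第一种方式,下面给出一段简化的Python示意代码(基于NumPy,仅为示意性草图,并非本申请方法的权威实现;假设M张环境探测地图已位于同一坐标系且尺寸相同,函数名与变量名均为示例):
    import numpy as np

    def fuse_by_average(detection_maps):
        # detection_maps: M张已对齐、尺寸相同的环境探测地图(像素值范围为0-255)
        stack = np.stack(detection_maps, axis=0).astype(np.float32)  # 形状为(M, H, W)
        fused = stack.mean(axis=0)                                    # 对像素值进行逐点平均
        return fused.astype(np.uint8)                                 # 平均像素值作为环境融合地图的像素值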
第二种方式:从M张环境探测地图中确定出一致度较好的环境探测地图再进行融合。
在第二种方式下,对M张环境探测地图进行融合,以获取当前融合得到的环境融合地图,具体包括:从M张环境探测地图中确定出N张环境探测地图;对该N张环境探测地图进行融合,以获取当前融合得到的环境融合地图。
其中,上述N张环境探测地图中的任意两张环境探测地图之间的一致度大于或者等于第一阈值,N为小于或者等于M的正整数。当N=M时,说明上述M张环境探测地图中的任意两个环境探测地图之间的一致度均满足要求。
上述第一阈值可以是预设的阈值,该第一阈值的大小可以根据需要来设置。当机器人工作时对地图的精准度要求较高时,可以将第一阈值设置成一个较大的数值,而当机器人工作时对地图的精准度要求较低时,可以将第一阈值设置成一个较小的数值。例如,上述第一阈值可以设置为0.7。
另外,两张环境探测地图之间的一致度可以用两张环境探测地图之间栅格的匹配度来表示,如果两张环境探测地图的栅格匹配度越高,那么,这两种环境探测地图的之间的一致度也就越高。
假设,存在两张环境探测地图Mi与Mj,如果S(Mi)>S(Mj)(地图Mi的面积大于Mj的面积),则地图Mi与Mj之间的一致度c可以用下列公式(1)和公式(2)来计算。
(公式(1)和公式(2)在原文中以图像形式给出,此处从略。)其中,公式(1)和公式(2)涉及地图Mj上的栅格点,m和n是该栅格点的索引,r是地图Mj上的栅格点对应在Mi地图上栅格点的邻域半径,这里的r取值可以为2。
因此,可以通过上述公式(1)和(2)来计算M张探测地图中的任意两张环境探测地图之间的一致度,从中选择出一致度满足要求(可以是一致度大于某个阈值)的环境探测地图。
对上述N张环境探测地图的融合过程与上述对M张环境探测地图的融合过程类似,这里不再详细描述。
在第二种方式下,通过从M张环境探测地图中选择出一致度较好的环境探测地图进行融合,一般能够得到更准确的环境融合地图。
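由于公式(1)和公式(2)在原文中以图像形式给出,下面的Python示意代码仅按照上文的文字描述(以栅格匹配度衡量一致度,邻域半径r取2)给出一种可能的实现思路,属于假设性的草图,并非原公式的逐字实现:
    import numpy as np

    def grid_consistency(occ_i, occ_j, r=2):
        # occ_i、occ_j为同一坐标系下两张环境探测地图的布尔占据栅格(True表示被占据)
        # 统计Mj的占据栅格点中,在Mi对应位置的r邻域内也存在占据点的比例,作为一致度的一种近似
        pts = np.argwhere(occ_j)
        if len(pts) == 0:
            return 1.0
        h, w = occ_i.shape
        matched = 0
        for m, n in pts:
            y0, y1 = max(0, m - r), min(h, m + r + 1)
            x0, x1 = max(0, n - r), min(w, n + r + 1)
            if occ_i[y0:y1, x0:x1].any():        # Mi对应邻域内是否也有占据点
                matched += 1
        return matched / len(pts)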
第三种方式:对M张环境探测地图先进行预处理,然后再对预处理后的M张环境探测地图进行融合,以获取当前融合得到的环境融合地图。
在第三种方式下,对M张环境探测地图进行融合,以获取当前融合得到的环境融合地图,具体包括:对M张环境探测地图进行预处理,得到预处理后的M张环境探测地图;对预处理后的M张环境探测地图进行融合,以获取当前融合得到的环境融合地图。
其中,对M张环境探测地图进行预处理可以是在M张探测地图中的一张环境探测地图中出现与其它环境探测地图的图像内容差异较大的图像区域时,将该图像区域内的图像内容删除。
例如,M张环境探测地图中包括第一环境探测地图,如果该第一环境探测地图的A区域的图像内容与M张环境探测地图中的其它环境探测地图(M张环境探测地图中除了第一环境探测地图之外的环境探测地图)中对应区域的图像内容差异较大,那么,可以将第一环境探测地图中的A区域的图像内容删掉或者清除掉,得到预处理之后的第一环境探测地图。
上述预处理后的M张环境探测地图中的任意两张环境探测地图之间的一致度可以大于或者等于第三阈值,该第三阈值可以是预设的阈值,该第三阈值的大小可以根据实际需要来设置。当机器人工作时对地图的精准度要求较高时,可以将第三阈值设置成一个较大的数值,而当机器人工作时对地图的精准度要求较低时,可以将第三阈值设置成一个较小的数值。
应理解,第三种方式下的第三阈值与第二种方式下的第一阈值既可以相同,也可以不同。
在第三种方式下,通过剔除掉M张环境探测地图中与其它环境探测地图的差异较大的图像内容,再进行图像的融合,一般能够得到更准确的环境融合地图。
另外,在上述三种方式中获取环境融合地图之前,可以先对M张环境探测地图进行滤波处理或者降噪处理,然后再进行后续的处理。
也就是说,在上述步骤1002中,可以在根据M张环境探测地图确定环境融合地图之前,先对图像滤波,然后根据滤波处理后的环境探测地图来确定环境融合地图。
可选地,上述根据M张环境探测地图确定环境融合地图,包括:对该M张环境探测地图进行滤波处理,得到M张滤波处理后的环境探测地图;对M张滤波处理后的环境探测地图进行融合,以获取当前融合得到的环境融合地图。
这里的滤波处理具体可以是形态滤波,以提取图像的线特征。
通过对环境探测地图进行滤波处理,能够去除掉环境中小的物体的干扰,保留环境的主要布局。
例如,滤波处理前的环境探测地图可以如图3所示,对图3所示的环境探测地图进行滤波处理,可以得到图4所示的环境探测地图。与图3相比,图4中的环境探测地图中的图像噪音更少,能够显示环境的主要布局。
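作为形态滤波的一个示意,下面给出一段基于OpenCV的Python代码草图(核大小等参数为示例假设,实际取值需按地图分辨率调整):
    import cv2

    def morphological_filter(detection_map, kernel_size=3):
        # 对环境探测地图先做形态学开运算去除小的孤立噪点,再做闭运算连接断裂的边界线,
        # 从而去除环境中小物体的干扰,保留环境的主要布局(线特征)
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
        opened = cv2.morphologyEx(detection_map, cv2.MORPH_OPEN, kernel)
        closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)
        return closed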
假设,在步骤1002中,集合C中保存了M张环境探测地图,并将该M张环境探测地图写入到集合D中,接下来,又将该集合D中的异常地图剔除掉,得到集合Ds,该集合Ds中有N张环境探测地图。那么,在根据该集合Ds中的N张环境探测地图确定环境融合地图时,可以按照下列步骤进行。
步骤A:根据集合Ds得到环境融合地图的地图边界(x min,x max)和(y min,y max)。
具体地,上述集合Ds中的N张环境探测地图均是位于同一坐标系中的地图。假设Ds集合中第i张地图的左顶点坐标为(x_lt^i,y_lt^i),右底点坐标为(x_rb^i,y_rb^i),其中,i={1,2,..,n}。那么,可以根据公式(3)和公式(4)来确定环境融合地图的地图边界:
x_min=min_i(x_lt^i),x_max=max_i(x_rb^i)      (3)
y_min=min_i(y_lt^i),y_max=max_i(y_rb^i)      (4)
步骤B:根据集合Ds确定环境融合地图。
具体地,在步骤B中可以先根据环境融合地图的地图边界来确定环境融合地图的尺寸,然后创建以(x min,y min)为图像原点(图像原点位于环境融合地图的左上角位置)的环境融合地图,并将集合Ds中的N张探测地图中的栅格点依次投影到该环境融合地图中,环境融合地图的像素点的取值为集合Ds中的N张探测地图的像素点的像素值的集合。
例如,可以根据公式(5)来确定环境融合地图(公式(5)在原文中以图像形式给出,此处从略)。其中,公式(5)涉及地图M_pre中的像素点(i,j)的像素值与地图M_k中对应的像素值。根据公式(5)可以得到环境融合地图,该环境融合地图可以记为Mpre,通过对Mpre进行滤波处理,可以得到滤波处理后的环境融合地图,记为Mobs。
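步骤A和步骤B可以用如下Python示意代码来概括(假设集合Ds中的每张地图以(栅格数组, 左顶点物理坐标)的形式给出、分辨率相同、均位于同一坐标系,且各地图左顶点坐标与分辨率栅格对齐;该代码只是示意性草图,并非公式(3)至公式(5)的逐字实现):
    import numpy as np

    def fuse_maps(ds_maps, resolution, free_value=255):
        # ds_maps: [(grid, (x_lt, y_lt)), ...],grid为二维栅格数组,(x_lt, y_lt)为左顶点坐标
        # 步骤A:取各地图边界的并集,得到环境融合地图的地图边界
        x_min = min(o[0] for _, o in ds_maps)
        y_min = min(o[1] for _, o in ds_maps)
        x_max = max(o[0] + g.shape[1] * resolution for g, o in ds_maps)
        y_max = max(o[1] + g.shape[0] * resolution for g, o in ds_maps)
        h = int(round((y_max - y_min) / resolution))
        w = int(round((x_max - x_min) / resolution))
        # 步骤B:创建以(x_min, y_min)为原点的融合地图,把各地图的栅格点投影进去并取平均
        acc = np.zeros((h, w), dtype=np.float32)
        cnt = np.zeros((h, w), dtype=np.float32)
        for grid, (ox, oy) in ds_maps:
            gy = int(round((oy - y_min) / resolution))
            gx = int(round((ox - x_min) / resolution))
            gh, gw = grid.shape
            acc[gy:gy + gh, gx:gx + gw] += grid.astype(np.float32)
            cnt[gy:gy + gh, gx:gx + gw] += 1.0
        fused = np.full((h, w), float(free_value), dtype=np.float32)   # 未被任何地图覆盖的区域按空闲处理
        covered = cnt > 0
        fused[covered] = acc[covered] / cnt[covered]
        return fused.astype(np.uint8)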
1003、对环境融合地图的像素值和环境布局地图的像素值进行加权处理,得到更新后的环境布局地图。
其中,上述环境布局地图是所述移动机器人当前保存的所述第一工作场所的环境布局地图。当完成了环境布局地图的更新,得到更新后的环境布局地图之后,移动机器人可以将原来的环境布局地图替换掉,保存更新后的环境布局地图。直到下次更新地图时再根据探测到的环境探测地图对环境布局地图进行再次更新。
上述步骤1003中的环境布局地图或者更新后的环境布局地图可以如图5所示,图5中的环境布局地图显示了一个家庭环境中的物体布局情况。
本申请中,通过移动机器人工作时获取到的多张探测地图对移动机器人保存的环境布局地图进行更新,使得更新后的环境布局地图能够反映出更详细的环境布局情况。便于移动机器人后续根据该更新后的环境布局地图更好地执行工作任务。
在上述步骤1003中对环境融合地图的像素值和环境布局地图的像素值进行加权处理时,当前融合得到的环境融合地图的像素值对应的权重为第一权重,当前保存的环境布局地图的像素值对应的权重为第二权重,其中,第一权重和第二权重的大小可以根据移动机器人的地图更新需求来确定。
移动机器人的地图更新需求可以是移动机器人(执行任务时)对环境布局地图的更新的快慢(或者频率)的要求,也可以是移动机器人(执行任务时)对环境布局地图的更新幅度的要求。
一般来说,当移动机器人要求环境布局地图快速更新时,可以将第一权重设置成一个较大的数值,而将第二权重设置成一个较小的数值。反之,当移动机器人并不要求环境布局地图快速更新时,可以将第一权重设置成一个较小的数值,而将第二权重设置成一个较大的数值。
另外,当移动机器人要求环境布局地图更新幅度较大时,可以将第一权重设置成一个较大的数值,而将第二权重设置成一个较小的数值。反之,移动机器人要求环境布局地图更新幅度较小时,可以将第一权重设置成一个较小的数值,而将第二权重设置成一个较大的数值。
本申请中,由于第一权重和第二权重是根据移动机器人的地图更新需求确定的,因此,可以根据移动机器人的地图更新需求来灵活更新环境布局地图。
可选地,上述第一权重和第二权重的大小是根据移动机器人的地图更新需求确定的,包括:第一权重与移动机器人需要的环境布局地图更新频率为正相关关系,第二权重与移动机器人需要的环境布局地图更新频率为反相关关系。
也就是说,当移动机器人(执行任务时)要求的地图更新频率较高时,可以将第一权重设置成一个较大的数值,而将第二权重设置成一个较小的数值(例如,第一权重值设置成0.7,第二权重值设置为0.3)。反之,当移动机器人(执行任务时)要求的地图更新频率较低时,可以将第一权重设置成一个较小的数值,而将第二权重设置成一个较大的数值(例如,第一权重值设置成0.3,第二权重值设置为0.7)。
可选地,上述第一权重和第二权重的大小是根据移动机器人的地图更新需求确定的,包括:第一权重与移动机器人需要的环境布局地图更新幅度为正相关关系,第二权重与移动机器人需要的环境布局地图更新幅度为反相关关系。
另外,上述第一权重和第二权重也可以是预先设置好的。
可选地,上述第一权重和第二权重由用户来设置。例如,用户可以通过控制界面来更改第一权重和第二权重的数值。
应理解,用户还可以通过设置其他参数来间接的实现对第一权重和第二权重的设置,例如,用户可以通过设置地图更新频率的参数来间接的实现对第一权重和第二权重的设置。
本申请中,由于第一权重和第二权重是根据移动机器人的地图更新需求确定的,因此,可以根据移动机器人的地图更新需求来灵活更新环境布局地图。
上述步骤1003中的环境布局地图可以记为Mr,步骤1003中可以根据公式(6)对地图Mr进行更新,得到更新后的环境布局地图。
M_r_new=α*M_r_old+(1-α)*M_obs         (6)
在上述公式(6)中,M_r_old表示环境布局地图,M_r_new表示更新后的环境布局地图,M_obs表示滤波处理后的环境融合地图。α为M_r_old的权重(相当于上文中的第二权重),(1-α)为M_obs的权重(相当于上文中的第一权重)。在本申请中,α的取值具体可以为0.7,这样在融合时,M_r_old的权重为0.7,M_obs的权重为0.3。
在根据上述公式(6)得到M_r_new之后,可以将M_r_new中的像素点的值与栅格阈值比较,如果像素值大于阈值(在本申请中阈值可以设置为180),则该点是环境的被占据点,否则认为是自由点。
在本申请中,环境布局地图实际可以视为一种概率地图,环境布局地图的像素值的取值范围在0-255之间,其中,0表示栅格点/像素点被占据(有障碍物),255表示栅格点空闲(没有障碍物),其他数值表示被空闲的概率(数值越接近255说明空闲的概率越大)。
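将公式(6)的加权更新以及上述占据判断写成代码,大致如下(Python示意草图,权重与阈值沿用上文的示例取值;若实际采用0表示占据、255表示空闲的编码,占据判断的方向应相应调整):
    import numpy as np

    def update_layout_map(m_r_old, m_obs, alpha=0.7, occ_thresh=180):
        # 按 M_r_new = alpha*M_r_old + (1-alpha)*M_obs 对环境布局地图进行加权更新
        m_r_new = alpha * m_r_old.astype(np.float32) + (1.0 - alpha) * m_obs.astype(np.float32)
        m_r_new = np.clip(m_r_new, 0, 255).astype(np.uint8)
        occupied = m_r_new > occ_thresh      # 沿用上文"像素值大于阈值视为被占据"的写法
        return m_r_new, occupied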
在本申请中,M张环境探测地图中的每张环境探测地图可以是根据移动机器人执行一次任务时探测得到的物体分布信息来确定的。为了更详细的了解根据一次任务时探测到的物体分布信息来确定一张环境探测地图的过程,下面以图6为例进行详细说明。
图6是获取M张环境探测地图的过程的示意图。图6所示的过程包括步骤1001a至1001f,这些步骤可以视为上述步骤1001的细化或者具体实现,下面对这些步骤进行详细的介绍。
1001a、开始。
表示开始获取上述M张环境探测地图。
1001b、对周围环境进行探测,得到物体分布信息。
在步骤1001b中,移动机器人可以在控制装置(该控制装置既可以是一个单独的控制设备,也可以是位于移动机器人内部的控制模块)的控制下利用自身的探测器或者传感器(该探测器或者传感器具体可以是摄像头、红外传感器、测距雷达以及超声传感器)对周围环境(或者周围物体)进行探测,得到反映周围物体分布情况的物体分布信息。
1001c、确定当前任务是否执行完毕。
步骤1001c中可以由移动机器人自身进行判断当前任务是否执行完毕。例如,以扫地机器人为例,判断当前任务是否执行完毕,具体可以判断当前清扫的面积是否达到预设要求,或者清扫的时间是否已经达到预设要求,如果当前已经清扫的面积达到预设要求,或者当前已经清扫的时间达到预设要求,那么,就可以确认当前任务执行完毕。
如果步骤1001c中确定当前任务执行完毕,那么,执行步骤1001d,以确定一张环境探测地图;如果步骤1001c中确定当前任务未执行完毕,那么,可以重新执行步骤1001b。
1001d、根据当前任务执行过程中获取到的物体分布信息确定一张环境探测地图。
应理解,在步骤1001d中,移动机器人可以将执行当前任务过程中获取到的物体分布信息发送给控制装置,由控制装置(该控制装置既可以是一个单独的控制设备,也可以是位于移动机器人内部的控制模块)根据该物体分布信息来确定一张环境探测地图。
1001e、确定环境探测地图的数量是否达到M。
当步骤1001e中确定出环境探测地图的数量达到M时执行步骤1001f,也就是获取到了M张环境探测地图;当步骤1001e中确定出环境探测地图的数量还未达到M时,重新执行步骤1001b。
在步骤1001e中,控制装置可以对获取到的环境探测地图的数量进行统计,当环境探测地图的数量达不到M时,继续控制移动机器人获取物体分布信息,直到获取到M张环境探测地图。
1001f、得到M张环境探测地图。
应理解,在图6所示的过程中,可以是根据每次执行任务过程中获取到的(全部)物体分布信息来确定一张环境探测地图。实际上,在本申请中,也可以根据每次任务执行过程中获取到的部分物体分布信息来确定一张环境探测地图。也就是说,可以根据每次执行任务过程中获取到的全部物体分布信息来确定一张或者一张以上的环境探测地图。
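图6所示的获取M张环境探测地图的流程可以用如下Python伪代码框架来概括(robot对象及其各个方法均为示例假设的占位接口,并非某个真实的API):
    def collect_detection_maps(robot, m):
        set_c = []                                        # 环境探测地图集合C
        while len(set_c) < m:                             # 对应步骤1001e:数量未达到M则继续
            distribution = []
            robot.start_task()
            while not robot.task_finished():              # 对应步骤1001b/1001c:边执行任务边探测
                distribution.append(robot.sense_objects())
            set_c.append(robot.build_detection_map(distribution))   # 对应步骤1001d
        return set_c                                      # 对应步骤1001f:得到M张环境探测地图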
另外,在本申请中,一张环境探测地图还可以是根据移动机器人执行多次任务过程中获取到的(全部)物体分布信息确定的。
也就是说,本申请中,一张环境探测地图既可以是根据移动机器人一次执行工作任务过程中获取到的全部物体分布信息来确定,也可以是根据移动机器人一次执行工作任务过程中获取到的部分物体分布信息来确定,还可以是根据移动机器人多次(两次或者两次以上)执行工作任务过程中获取到的部分物体分布信息来确定。
在本申请中,在获取M张环境探测地图的过程中,为了保证获取到的每张环境探测地图的准确性,防止出现由于某次定位异常或者探测异常而导致得到的环境探测地图不够准确,可以根据移动机器人在某段预设时间内获取到的物体分布信息得到该预设时间对应的实时环境探测地图,然后将该实时环境探测地图与移动机器人当前保存的环境布局地图进行比较,如果实时环境探测地图与当前保存的环境布局地图的差异较大的话,可能是由于定位故障或者异常使得当前一段时间内获取到的物体分布信息不够准确而引起的。在这种情况下,可以将当前一段时间内获取到的物体分布信息舍弃掉,从而保证获取到的M张环境探测地图的准确性。
也就是说,在获取上述M张环境探测地图的过程中,可以将很可能由于定位异常而获取到的物体信息排除掉,从而使得获取到的物体分布信息更加准确。
图7是本申请实施例中获取物体分布信息的过程的示意图。图7所示的过程包括步骤2001至2007,这些步骤可以发生在步骤1001中获取M张环境探测地图的过程中。下面对步骤2001至2007进行详细的介绍。
2001、开始。
步骤2001表示开始获取当前探测地图。
2002、获取当前物体分布信息。
上述当前物体分布信息是移动机器人在预设间隔内或者预设距离内探测到的物体分布信息,该预设间隔是移动机器人到达当前探测点之前的一段时间间隔,该预设距离是移动机器人到达当前探测点之前移动的一段距离。
上述预设间隔可以是人为设置的一段时间间隔,上述预设距离可以是人为设置的一段距离。上述预设间隔和预设距离可以根据具体的需求来灵活设置。
2003、根据当前物体分布信息确定当前环境探测地图。
步骤2003中的当前环境探测地图具体可以通过以下两种方式获得。
方式A:根据移动机器人在预设的一段时间内获取到的物体分布信息确定。
例如,在方式A下,可以根据移动机器人每工作5分钟时间获取到的物体分布信息来确定当前环境探测地图。
应理解,这里仅以5分钟为例进行说明,在本申请中,还可以以其他任意时间长度内获取到的物体分布信息来确定当前环境探测地图。
方式B:根据移动机器人在预设距离内获取到的物体分布信息确定。
例如,在方式B下,可以根据移动机器人每移动5米的距离获取的物体分布信息来确定当前环境探测地图。
应理解,这里仅以距离为5米为例进行说明,在本申请中,还可以以移动机器人移动其他任意距离获得的物体分布信息来确定当前环境探测地图。
2004、确定当前环境探测地图与环境布局地图的一致度是否小于第二阈值。
当步骤2004中确定出当前环境探测地图与环境布局地图的一致度小于第二阈值时,说明当前环境探测地图与环境布局地图之间的差异可能比较大,很可能是由于定位不准、传感器故障或者其他模块的故障而导致获取到的物体分布信息不够准确,进而导致得到的当前环境探测地图不够准确,因此,需要重新获取环境探测地图,也就是执行步骤2005。
当步骤2004中确定出当前环境探测地图与环境布局地图的一致度大于或者等于第二阈值时,说明当前环境探测地图与环境布局地图之间的差异可能比较小,控制移动机器人继续进行探测即可,也就是重新执行步骤2002。
上述第二阈值可以是预先设置的一个阈值,第二阈值的大小可以与最终要求得到的环境布局地图的精度有关,当最终要求得到的环境布局地图的精度较高时,可以将第二阈值设置成一个较大的数值,而当最终要求得到的环境布局地图的精度较低时,可以将第二阈值设置成一个较小的数值。这里的第二阈值具体可以设置为0.6。
2005、确定移动机器人对周围环境进行探测时发生异常。
上述移动机器人对周围环境进行探测时发生异常,可以是指移动机器人在对周围环境探测过程中由于某种故障导致无法探测得到准确的物体分布信息。具体地,移动机器人可能会是由于定位故障、传感器故障或者移动机器人内部某些处理模块的故障,导致移动机器人无法探测得到准确的物体分布信息。
2006、控制移动机器人进行异常恢复。
上述步骤2006中控制移动机器人进行异常恢复的方式有多种,下面对常用的异常恢复方式进行介绍。
异常恢复方式一:执行回退操作,以重新获得物体分布信息。
在异常恢复方式一中,可以控制移动机器人从当前探测点回退到第一探测点;控制移动机器人从第一探测点开始重新对周围环境进行探测,以重新获取物体分布信息。
其中,上述第一探测点与当前探测点之间的距离为预设距离,该预设距离可以是人为设置的一个距离,该距离的具体数值可以根据经验灵活设置。
通过重新开始对周围环境进行探测,能够重新获取物体分布信息,便于后续根据获取到的物体分布信息来确定环境探测地图。
异常恢复方式二:控制移动机器人对操作系统进行复位。
上述控制移动机器人对操作系统进行复位相当于控制移动机器人进行系统的重启(类似于电脑的重启)。
异常恢复方式三:控制移动机器人对移动机器人的传感器进行重启。
上述控制移动机器人对移动机器人的传感器进行重启,具体可以是指控制移动机器人将传感器的端口关闭后再重新打开。
上述移动机器人的传感器可以包括激光雷达,编码器,陀螺仪,以及超声传感器,红外传感器等等。
在上述步骤2006之后,图7所示的方法还可以包括步骤2007,通过执行步骤2007实现对不准确的物体分布信息的清除。
2007、控制移动机器人清除当前物体分布信息。
在上述步骤2007之后,还可以继续执行步骤2002,以获取当前物体分布信息。
通过控制移动机器人清除当前物体分布信息,能够将可能由于定位异常或者传感器故障而获取到的不太准确的物体分布信息清除掉,能够尽可能的保证移动机器人获取到的物体分布信息的准确性。
图7所示的过程可以发生在移动机器人获取物体分布信息的过程中,通过图7所示的过程,能够尽可能的消除由于定位异常而导致获取到的物体分布信息不准确的情况,进而能够使得获取到的环境探测地图尽可能的准确。
本申请中,在移动机器人探测得到的环境探测地图与环境布局地图的差异较大时,可以执行回退操作,重新进行探测操作,以尽可能的保证环境探测地图的准确性。
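图7所示的异常检测与恢复过程可以用如下Python伪代码框架来概括(其中robot对象的各个方法以及map_consistency函数均为示例假设的占位接口,一致度的计算可参考前文的示意实现):
    def detect_and_recover(robot, layout_map, second_thresh=0.6, backoff_dist=1.0):
        info = robot.collect_recent_distribution()       # 预设间隔或预设距离内的当前物体分布信息
        current_map = robot.build_detection_map(info)    # 步骤2003:确定当前环境探测地图
        if map_consistency(current_map, layout_map) < second_thresh:   # 步骤2004/2005:判定发生异常
            robot.clear_current_distribution()           # 步骤2007:清除当前物体分布信息
            robot.back_off(backoff_dist)                 # 从当前探测点回退到第一探测点
            robot.restart_detection()                    # 从第一探测点重新开始探测
            return False                                 # 本次获取的数据未被采纳
        return True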
可选地,上述M张环境探测地图中栅格点的坐标值均是位于同一坐标系下的坐标值。
当上述M个环境探测地图是位于同一坐标系下的地图时,通过M个环境探测地图能够融合得到更精准的环境布局地图。
可选地,上述M张环境探测地图中栅格点的坐标值是位于参考坐标系下的坐标值,该参考坐标系的原点位于以下位置中的任意一个位置:
所述移动机器人结束任务后的停靠位置;
所述移动机器人的充电座所在的位置;
与所述移动机器人匹配的垃圾中转站所在的位置。
例如,如图1所示,扫地机器人附近为充电座,可以选择该充电座的位置作为参考坐标系的原点。
在获取上述M张环境探测地图时,既可以在获取M张环境探测地图的过程中直接将M张环境探测地图设置在参考坐标系中,这样得到的M张环境探测地图中栅格点的坐标值均是位于同一坐标系(参考坐标系)中的坐标值。
此外,还可以在获取M张环境探测地图时,对于每一张环境探测地图,可以将该环境探测地图设置在以对应的工作任务开始执行时的起点为坐标原点的坐标系中,后续再将这M张环境探测地图转化到同一坐标系(参考坐标系)中。
可选地,上述获取M张环境探测地图,包括:获取所述M张环境探测地图中的第一环境探测地图,所述第一环境探测地图中的栅格点的坐标值是位于第一坐标系下的坐标值;将所述第一环境探测地图中的栅格点的坐标值转换到参考坐标系下的坐标值。
上述第一环境探测地图可以是上述M张环境探测地图中的任意一张环境探测地图,该第一环境探测地图可以是根据移动机器人执行第i(1≤i≤M,且i为整数)次工作任务时探测到的物体分布信息确定的,上述第一坐标系的原点可以是根据移动机器人执行该第i次任务时的起始位置确定的。
具体地,上述第一坐标系的坐标原点可以是移动机器人执行该第i次任务时的起始点(例如,可以是移动机器人执行第i次任务的起始位置的中心点)。
在本申请中,既可以通过控制装置与移动机器人之间的交互实现移动机器人工作地图的更新,也可以由移动机器人单独实现对工作地图的更新。下面对这些实现方式进行详细的介绍。
图8是本申请实施例的更新移动机器人工作地图的方法的示意图。
图8所示的方法可以由控制装置和移动机器人来共同执行,图8所示的方法包括步骤10010至10050,下面对这些步骤分别进行详细的介绍。
10010、控制装置从移动机器人获取M张环境探测地图。
在步骤10010中,移动机器人可以向控制装置发送该M张环境探测地图,控制装置接收该M张环境探测地图。
应理解,在上述步骤10010中,既可以是在移动机器人生成了M张环境探测地图之后,控制装置一次性的从移动机器人获取M张环境探测地图(移动机器人发送M张环境探测地图),也可以是在移动机器人每生成一张环境探测地图之后,控制装置就从该移动机器人获取一张环境探测地图(移动机器人每生成一张环境探测地图之后就发送一张环境探测地图)。
10020、控制装置对M张环境探测地图进行融合,以获得当前融合得到的环境融合地图。
在步骤10020中,控制装置对M张环境探测地图进行融合(具体可以是叠加)得到环境融合地图时,既可以直接对上述M张环境探测地图进行融合,也可以是对上述M张环境探测地图做了一定的预处理(例如,滤波处理,降噪处理)之后再进行融合,也可以是对上述M张环境探测地图做了一定筛选之后,对筛选后得到的环境探测地图进行融合。
可选地,上述步骤10020具体包括:对M张环境探测地图进行滤波处理,得到M张滤波处理后的环境探测地图;对该M张滤波处理后的环境探测地图进行融合,得到环境融合地图。
通过对环境探测地图进行滤波处理,能够去除掉环境中小的物体的干扰,保留环境的主要布局。在对上述环境探测地图进行滤波处理时,具体可以采用形态滤波,以提取环境探测地图的线特征。
10030、控制装置从移动机器人获取移动机器人当前保存的环境布局地图。
在步骤10030中,移动机器人可以将当前保存的环境布局地图发送给控制装置,控制装置接收移动机器人当前保存的环境布局地图。
10040、控制装置对当前融合得到的环境融合地图的像素值和当前保存的环境布局地图的像素值进行加权处理,得到更新后的环境布局地图。
上述步骤10040得到更新后的环境布局地图的具体过程与上文中的步骤1003表示的获取更新后的环境布局地图的具体过程可以相同,这里不再详细描述。
10050、控制装置将更新后的环境布局地图发送给移动机器人。
本申请中,通过从移动机器人获取移动机器人工作时得到的多张探测地图,能够对移动机器人当前保存的环境布局地图进行更新,使得更新后的环境布局地图能够反映出更详细的环境布局情况。便于移动机器人后续根据该更新后的环境布局地图更好地执行工作任务。
移动机器人接收更新后的环境布局地图之后,可以根据该更新后的环境布局地图来执行任务。由于上述过程中对环境布局地图进行了更新,使得更新后的环境布局地图更加准确,使得移动机器人后续能够根据该更新后的环境布局地图更好地进行工作。
在本申请中,当移动机器人由于传感器故障(例如,传感器定位故障)导致移动机器人获取到的物体分布信息不太准确时,可以通过控制装置来控制移动机器人进行异常恢复,以使得获取到的物体分布信息具有较高的准确性。下面结合图9对本申请的方法中的异常恢复过程进行详细的介绍。
图9是本申请实施例的更新移动机器人工作地图的方法中的异常恢复过程的示意图。
图9所示的异常恢复过程包括步骤20010至步骤20050,下面对这些步骤进行详细的介绍。
20010、控制装置从移动机器人获取当前物体分布信息。
当前物体分布信息是移动机器人在预设间隔内或者预设距离内探测到的物体分布信息,该预设间隔是移动机器人到达当前探测点之前的一段时间间隔,该预设距离是移动机器人到达当前探测点之前移动的一段距离。
在步骤20010中,移动机器人可以将当前物体分布信息发送给控制装置,控制装置接收到当前物体分布信息之后,可以执行步骤20020。
移动机器人可以周期性的向控制装置发送当前物体分布信息。
20020、控制装置根据当前物体分布信息确定当前环境探测地图。
20030、在当前环境探测地图与当前保存的环境布局地图的一致度小于第二阈值的情况下,控制装置确定移动机器人在对周围环境进行探测时发生异常。
20040、控制装置向移动机器人发送异常恢复指令。
20050、移动机器人响应于异常恢复指令,进行异常恢复。
上述异常恢复指令用于指示移动机器人进行异常恢复,在接收到该异常恢复指令之后,移动机器人响应于该异常恢复指令进行异常恢复。
上述异常恢复指令可以包括多种具体的操作指令,例如,回退指令、重新探测指令和重启指令等等。
可选地,控制装置向移动机器人发送回退指令。
上述回退指令用于指示移动机器人从当前探测点回退到第一探测点,该第一探测点与当前探测点之间的距离为预设距离。当移动机器人接收到回退指令之后,响应于该回退指令,从当前探测点回退到第一探测点。
可选地,控制装置向移动机器人发送重新探测指令。
上述重新探测指令用于指示移动机器人从第一探测点开始重新对周围环境进行探测,以重新获取物体分布信息。当移动机器人接收到重新探测指令之后,响应于该重新探测指令从第一探测点开始重新对周围环境进行探测,以重新获取物体分布信息。
可选地,控制装置向移动机器人发送重启指令。
上述重启指令用于指示移动机器人进行重启,并在重启后重新对周围环境进行探测。 当移动机器人接收到该重启指令之后,移动机器人响应于该重启指令,进行重启,并在重启之后重新对周围环境进行探测。
上述通过重启指令既可以指示移动机器人对操作系统进行复位,也可以指示移动机器人对相应的传感器进行重启。
其中,对操作系统的重启类似于对电脑系统的重启,传感器的重启具体可以将传感器的端口关闭后再重新打开。
上述移动机器人的传感器可以包括激光雷达,编码器,陀螺仪,以及超声传感器,红外传感器等等。
可选地,控制装置向移动机器人发送清除指令。
上述清除指令用于清除当前物体分布信息,当移动机器人接收到清除指令之后,响应于该清除指令,将当前物体分布信息清除。
通过控制移动机器人清除当前物体分布信息,能够将可能由于定位异常或者传感器故障而获取到的不太准确的物体分布信息清除掉,能够尽可能的保证移动机器人获取到的物体分布信息的准确性。
应理解,上文在介绍图2所示的方法时关于异常恢复的内容同样也适用于图9所示的异常恢复过程,为了避免不必要的重复,这里不再重复描述。
在本申请中,也可以由移动机器人本身实现地图的更新,下面结合图10对这种情况进行详细的介绍。
图10是本申请实施例的更新移动机器人工作地图的方法的示意性流程图。
图10所示的方法可以由移动机器人自身来执行,图10所示的方法包括步骤30010至步骤30050,下面对这些步骤进行详细的介绍。
30010、在运动过程中对周围环境进行探测,得到环境布局信息。
30020、根据一次探测周期探测得到的环境布局信息确定一张环境探测地图。
上述一次探测周期可以与一个工作周期的时间对应,其中,一个工作周期可以是指移动机器人完成一次工作任务。
30030、确定是否获取到M张环境探测地图。
上述M张环境探测地图是根据移动机器人在运动过程中探测得到的物体分布信息确定的,M为大于1的整数。
30040、对M张环境探测地图进行融合,以获得当前融合得到的环境融合地图。
30050、对当前融合得到的环境融合地图的像素值和当前保存的环境布局地图的像素值进行加权处理,得到更新后的环境布局地图。
当前保存的环境布局地图是对上次融合得到的环境融合地图和上次保存的环境布局地图进行加权处理得到的,而上次融合得到的环境融合地图是根据上次获取到的M张环境探测地图进行融合得到的。
本申请中,通过获取移动机器人工作时得到的多张探测地图,能够对移动机器人当前保存的环境布局地图进行更新,使得更新后的环境布局地图能够反映出更详细的环境布局情况。便于移动机器人后续根据该更新后的环境布局地图更好地执行工作任务。
上述图10所示的方法与上文中图2所示的方法的过程相似,图2所示的方法的执行主体可以是移动机器人的控制装置,图10所示的方法的执行主体可以是移动机器人自身。图2所示的方法中对地图的融合和更新的扩展、解释和说明的内容也适用于图10所示的方法,为了避免不必要的重复,这里不再重复描述。
为了更好地理解本申请实施例的方案,下面结合图11对本申请实施例的更新移动机器人工作地图的方法的具体过程进行详细的介绍。
图11示出了扫地机器人在清扫任务的过程中对工作地图进行更新的过程。
3001、开始。
步骤3001表示开始执行清扫任务。
在步骤3001中,当扫地机器人开始执行清扫任务时,扫地机器人处于初始状态。当扫地机器人处于初始状态时,扫地机器人执行清扫任务的次数为0,集合C和集合D均为空。其中,集合C用于存储扫地机器人获取到的环境探测地图,集合D用于存储对环境布局地图进行更新的环境探测地图。
3002、执行清扫任务。
应理解,在扫地机器人执行清扫任务的过程中,需要对周围的物体进行探测,进而获取到周围物体的物体分布信息。
3003、构建坐标系Wi-xy下的地图Mi。
其中,坐标系Wi-xy可以是以第i次执行清扫任务的起始点为坐标原点的坐标系,地图Mi是根据执行第i次清扫任务得到的物体分布信息确定的环境探测地图。
3004、确定是否检测到充电座。
当未检测到充电座时,继续执行步骤3003;当检测到充电座时,执行步骤3005。
3005、将地图Mi的坐标映射到坐标系Bi-xy下。
其中,坐标系Bi-xy是以充电座位置为坐标原点的坐标系,该坐标系可以称为参考坐标系或者一个标准坐标系。
坐标系Bi-xy除了可以是以充电座位置为坐标原点的坐标系之外,还可以是以扫地机器人结束任务后的停靠位置以及与扫地机器人相匹配的垃圾中转站所在的位置中的任意一个位置作为参考原点的坐标系。
上述步骤3003得到的地图Mi位于坐标系Wi-xy下,步骤3005中是将地图Mi的坐标从坐标系Wi-xy下转换到坐标系Bi-xy下。
经过步骤3005相当于将初始构建的地图Mi转换到统一的坐标系中。
具体地,可以根据公式(7)将地图Mi在坐标系Wi-xy下的坐标值转换到地图Mi在坐标系Bi-xy下的坐标值。
(公式(7)和公式(8)在原文中以图像形式给出,此处从略。)其中,公式(7)涉及地图Mi中的任意一点在坐标系Wi-xy下的坐标值、该点对应在坐标系Bi-xy下的坐标值,以及两个坐标系之间的转换系数(或者转换因子);该转换系数可以根据公式(8)计算得到,公式(8)涉及充电座(这里选择充电座作为Bi-xy的原点)在坐标系Wi-xy中的坐标值,θ为充电座的姿态角。
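若把充电座在坐标系Wi-xy下的位姿记为(x_d, y_d, θ),则从Wi-xy到Bi-xy的转换可以按标准的二维刚体变换来理解。下面的Python示意代码给出这一假设下的一种实现,仅为示意性草图,并非原公式(7)、(8)的逐字复现:
    import math

    def w_to_b(point_w, dock_pose_w):
        # point_w: 地图Mi中某点在坐标系Wi-xy下的坐标(x_w, y_w)
        # dock_pose_w: 充电座在Wi-xy下的位姿(x_d, y_d, theta),theta为姿态角
        x_w, y_w = point_w
        x_d, y_d, theta = dock_pose_w
        dx, dy = x_w - x_d, y_w - y_d                  # 先平移,使充电座成为新的原点
        c, s = math.cos(theta), math.sin(theta)
        x_b = c * dx + s * dy                          # 再旋转-theta,使Bi-xy的坐标轴与充电座朝向对齐
        y_b = -s * dx + c * dy
        return x_b, y_b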
3006、在坐标系下Bi-xy继续构建地图Mi。
3007、当清扫任务结束时,对地图Mi进行滤波处理。
3008、将滤波处理后的地图Mi保存在集合C中。
3009、确定扫地机器人的清扫次数i是否大于N。
当扫地机器人的清扫次数i小于或等于N时,继续执行步骤3002,进而继续获取地图;当扫地机器人的清扫次数i大于N时,执行步骤3010。
3010、确定集合C中任意两个地图间的一致性。
3011、将集合C中一致度较好的地图保存在集合D中。
在步骤3011中,可以采用上文中的公式(1)和(2)中的方式来确定两个地图之间的一致性,并将一致度大于预设阈值的地图保存到集合D中,具体过程这里不再详细描述。
3012、根据集合D中地图对地图Mr进行更新。
具体地,在步骤3012中,可以先根据集合D中的地图得到环境融合地图,然后再利用该环境融合地图对环境布局地图Mr进行更新,得到更新后的环境布局地图Mr。
进一步的,在步骤3012中,在得到环境融合地图之前,还可以先淘汰集合D中异常的环境探测地图,得到环境布局地图更新集合Ds,然后再利用该集合Ds中的环境探测地图来确定环境融合地图。
应理解,图11所示的方法中的集合C和集合D可以与上文中的集合C和集合D具有相同的含义。
上文结合附图对本申请实施例的更新移动机器人工作地图的方法进行了详细的介绍,下面结合图12至图17对本申请实施例的更新移动机器人工作地图的相关装置进行介绍,应理解,图12至图17所示的装置能够执行本申请实施例的更新移动机器人工作地图的方法的各个步骤。为了避免不必要的重复,下面在介绍本申请实施例的装置时适当省略重复的描述。
图12是本申请实施例的更新移动机器人工作地图的装置的示意性框图。
图12所示的装置5000包括:获取单元5001和处理单元5002。
图12所示的装置5000用于执行本申请实施例的更新移动机器人工作地图的方法,具体地,装置5000中的获取单元5001可以用于获取M张环境探测地图,处理单元5002用于基于获取单元5001获取到的M张环境探测地图最终得到更新后的环境布局地图。
在本申请中,上述处理单元5002既可以基于获取到的M张环境探测地图最终得到更新后的环境布局地图,也可以实现对移动机器人的控制。也就是说,这里的处理单元5002既可以具有数据处理的功能,也可以具有对移动机器人进行控制的功能。
图13是本申请实施例的更新移动机器人工作地图的装置的示意性框图。
图13所示的装置6000包括:获取单元6001、处理单元6002和收发单元6003。
上述装置6000中的获取单元6001可以用于从移动机器人获取M张环境探测地图,处理单元6002用于基于获取单元6001获取到的M张环境探测地图最终得到更新后的环境布局地图,收发单元6003用于将更新后的环境布局地图发送给移动机器人。
图13所示的装置6000可以相当于图8和图9中的控制装置,装置6000可以执行图8和图9中由控制装置执行的步骤。
图14是本申请实施例的移动机器人的示意性框图。
图14所示的移动机器人7000包括探测单元7001和处理单元7002。
移动机器人7000中的探测单元7001用于在运动过程中对周围环境进行探测,得到环境布局信息,处理单元7002用于对探测单元7001获得的环境布局信息进行处理,最终得到更新后的环境布局地图。
图14所示的移动机器人7000可以执行图10所示的方法中的步骤30010至30050。
进一步的,如图15所示,上述移动机器人7000还可以包括收发单元。
图15是本申请实施例的移动机器人的示意性框图。
图15所示的移动机器人7000除了包括图14所示的探测单元7001和处理单元7002之外,还包括收发单元7003。
其中,探测单元7001用于对周围环境进行探测,得到环境布局信息,处理单元7002用于根据探测单元7001在一次探测周期探测得到的环境布局信息确定一张环境探测地图,收发单元7003用于向控制装置发送M张环境探测地图,收发单元7003还可以从控制装置接收更新后的环境布局地图。
图15所示的移动机器人7000可以相当于图8和图9中的移动机器人,移动机器人7000可以执行图8和图9中由移动机器人执行的步骤。
图16是本申请实施例的更新移动机器人工作地图的装置的硬件结构示意图。图16所示的装置8000包括存储器8001、处理器8002、通信接口8003以及总线8004。其中,存储器8001、处理器8002、通信接口8003通过总线8004实现彼此之间的通信连接。
上述装置8000既可以是控制移动机器人的控制装置,也可以移动机器人。
装置8000中的处理器8002既可以获取相应的数据并实现对相应数据的处理(例如,获取M张环境布局地图,根据该M张环境布局地图最终得到更新后的环境布局地图),也可以实现对移动机器人的控制(例如,控制移动机器人执行回退操作,控制移动机器人清除当前物体分布信息)。
存储器8001可以是只读存储器(read only memory,ROM),静态存储设备,动态存储设备或者随机存取存储器(random access memory,RAM)。存储器8001可以存储程序,当存储器8001中存储的程序被处理器8002执行时,处理器8002用于执行本申请实施例的更新移动机器人工作地图的方法的各个步骤。
处理器8002可以采用通用的中央处理器(central processing unit,CPU),微处理器,应用专用集成电路(application specific integrated circuit,ASIC),图形处理器(graphics processing unit,GPU)或者一个或多个集成电路,用于执行相关程序,以实现本申请方法实施例的更新移动机器人工作地图的方法。
处理器8002还可以是一种集成电路芯片,具有信号的处理能力。在实现过程中,本申请的更新移动机器人工作地图的方法的各个步骤可以通过处理器8002中的硬件的集成逻辑电路或者软件形式的指令完成。
上述处理器8002还可以是通用处理器、数字信号处理器(digital signal processing,DSP)、ASIC、现场可编程门阵列(field programmable gate array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器8001,处理器8002读取存储器8001中的信息,结合其硬件完成本装置中包括的单元所需执行的功能,或者执行本申请方法实施例的更新移动机器人工作地图的方法。
通信接口8003使用例如但不限于收发器一类的收发装置,来实现装置8000与其他设备或通信网络之间的通信。例如,可以通过通信接口8003获取移动机器人探测得到的物体分布信息以及环境探测地图等数据。
总线8004可包括在装置8000各个部件(例如,存储器8001、处理器8002、通信接口8003)之间传送信息的通路。
上述装置5000中的获取单元5001和处理单元5002相当于装置8000中的处理器8002。
上述装置6000中的获取单元6001和处理单元6002相当于装置8000中的处理器8002,收发单元6003相当于装置8000中的通信接口8003。
上述图14或者图15所示的装置7000中的探测单元7001和处理单元7002相当于装置8000中的处理器8002,上述图15所示的装置7000中的收发单元7003相当于装置8000中的通信接口8003。
图17是本申请实施例的移动机器人的示意性框图。
图17所示的移动机器人9000包括很多功能模块,这些功能模块按照相互的支撑关系可以分成不同的层次,其中,底层的模块对上层模块的功能具有支撑作用。
移动机器人9000具体包括:机器人运动平台9011、传感器9021、通信系统9022、操作系统9023、运动规划9024、激光SLAM 9031、规划导航9032、环境记忆9033、任务管理9041、异常和恢复9042、智能服务9051。
下面对各个模块或者单元进行简单的介绍。
机器人运动平台9011包括机器人底盘,电机驱动单元,电源管理单元,主控单元等硬件单元。机器人运动平台9011可以根据一定的指令来控制机器人的运动。例如,在本申请中,当移动机器人需要从当前探测点回退到第一探测点时,可以通过机器人运动平台来控制移动机器人从当前探测点运动到第一探测点。
传感器9021具体可以包括激光雷达,编码器,陀螺仪、惯性测量单元(inertial measurement unit,IMU),以及超声传感器,红外传感器等等。在本申请中,传感器9021可以对周围的物体进行探测,得到物体分布信息。
通信系统9022可以使用串口通信,以太网通信或者CAN总线系统进行通信。在本申请中,移动机器人可以通过该通信系统9022与控制该移动机器人的控制装置进行通信,例如,通过移动机器人可以通过该通信系统9022向控制装置发送探测得到的物体分布信息,控制装置可以通过该通信系统9022向该移动机器人发送更新后的环境布局地图。
上述通信系统9022可以相当于装置8000中的通信接口8003。
操作系统9023可以是Linux系统。
运动规划9024能够对机器人的行走路径进行规划。
激光SLAM 9031是指主要用激光实现同步建图定位的算法模块,激光SLAM 9031生成的地图格式为栅格地图。在本申请中,激光SLAM 9031可以根据探测得到的物体分布信息来生成环境探测地图。
规划导航9032负责完成机器人自主运动以及避障,也包括其他服务任务需要的全覆盖等功能。
环境记忆模块9033可以用于保存机器人获取到的环境布局地图。例如,本申请中,环境记忆模块9033可以保存环境布局地图,并在获取到更新后的环境布局地图之后,对原来的环境布局地图进行更新。
任务管理9041主要是完成机器人的状态管理、用户指令交互、工作任务管理。
异常和恢复9042用于在机器人出现异常时进行恢复。在本申请中,异常和恢复9042能够在当前环境探测地图与环境布局地图之间的差异较大时能够控制移动机器人从当前探测点回退到第一探测点,并控制从第一探测点重新开始探测。
智能服务9051:在上述主要模块的基础上,智能服务9051可以自主、智能地完成为家庭或客户服务的工作。
智能服务9051可以包含与用户进行交互的界面,通过该界面用户能够灵活设置任务,对执行任务的相关参数进行调整等等。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (28)

  1. 一种更新移动机器人工作地图的方法,其特征在于,包括:
    获取M张环境探测地图,所述M张环境探测地图是根据移动机器人在运动过程中探测得到的物体分布信息确定的,M为大于1的整数;
    对所述M张环境探测地图进行融合,以获取当前融合得到的环境融合地图;
    对所述当前融合得到的环境融合地图的像素值和当前保存的环境布局地图的像素值进行加权处理,得到更新后的环境布局地图,其中,所述当前保存的环境布局地图是对上次融合得到的环境融合地图和上次保存的环境布局地图进行加权处理得到的。
  2. 如权利要求1所述的方法,其特征在于,在对所述当前融合得到的环境融合地图的像素值和所述当前保存的环境布局地图的像素值进行加权处理时,所述当前融合得到的环境融合地图的像素值和所述当前保存的环境布局地图的像素值对应的权重分别为第一权重和第二权重,所述第一权重和所述第二权重的大小是根据所述移动机器人工作时的地图更新需求确定的。
  3. 如权利要求2所述的方法,其特征在于,所述第一权重和所述第二权重的大小是根据所述移动机器人的地图更新需求确定的,包括:
    所述第一权重与所述移动机器人工作时需要的环境布局地图更新频率为正相关关系,所述第二权重与所述移动机器人工作时需要的环境布局地图更新频率为反相关关系。
  4. 如权利要求1-3中任一项所述的方法,其特征在于,对所述M张环境探测地图进行融合,以获取当前融合得到的环境融合地图,包括:
    从所述M张环境探测地图中确定出N张环境探测地图,所述N张环境探测地图中任意两张环境探测地图之间的一致度大于或者等于第一阈值,N为小于或者等于M的正整数;
    对所述N张环境探测地图进行融合,得到所述当前融合得到的环境融合地图。
  5. 如权利要求1-4中任一项所述的方法,其特征在于,所述方法还包括:
    根据当前物体分布信息确定当前环境探测地图,所述当前物体分布信息是所述移动机器人在预设间隔内或者预设距离内探测到的物体分布信息,所述预设间隔是所述移动机器人到达当前探测点之前的一段时间间隔,所述预设距离是所述移动机器人到达所述当前探测点之前移动的一段距离;
    在所述当前环境探测地图与所述环境布局地图的一致度小于第二阈值的情况下,确定所述移动机器人在对周围环境进行探测时发生异常;
    控制所述移动机器人进行异常恢复。
  6. 如权利要求5所述的方法,其特征在于,所述控制所述移动机器人进行异常恢复,包括:
    控制所述移动机器人从所述当前探测点回退到第一探测点,所述第一探测点与所述当前探测点之间的距离为预设距离;
    控制所述移动机器人从所述第一探测点开始重新对周围环境进行探测,以重新获取物体分布信息。
  7. 如权利要求5或6所述的方法,其特征在于,所述方法还包括:
    控制所述移动机器人清除所述当前物体分布信息。
  8. 如权利要求1-7中任一项所述的方法,其特征在于,所述M张环境探测地图均位于同一坐标系中。
  9. 如权利要求8所述的方法,其特征在于,所述M张环境探测地图均位于参考坐标系中,所述参考坐标系的原点位于以下任意一个位置中:
    所述移动机器人的充电座所在的位置;
    所述移动机器人结束任务后的停靠位置;
    与所述移动机器人匹配的垃圾中转站所在的位置。
  10. 如权利要求1-9中任一项所述的方法,其特征在于,所述M张环境探测地图是根据移动机器人在第一工作场所下执行工作任务时探测得到的物体分布信息确定的。
  11. 如权利要求10所述的方法,其特征在于,所述M张环境探测地图分别是根据M个物体分布信息确定的,所述M个物体分布信息分别是所述移动机器人执行M次工作任务时探测得到的物体分布信息。
  12. 一种更新移动机器人工作地图的方法,其特征在于,包括:
    从移动机器人获取M张环境探测地图,所述M张环境探测地图是根据移动机器人在运动过程中探测得到的物体分布信息确定的,M为大于1的整数;
    对M张环境探测地图进行融合,以获取当前融合得到的环境融合地图;
    从所述移动机器人获取所述移动机器人当前保存的环境布局地图;
    对所述当前融合得到的环境融合地图的像素值和当前保存的环境布局地图的像素值进行加权处理,得到更新后的环境布局地图,其中,所述当前保存的环境布局地图是对上次融合得到的环境融合地图和上次保存的环境布局地图进行加权处理得到的;
    将所述更新后的环境布局地图发送给所述移动机器人。
  13. 如权利要求12所述的方法,其特征在于,在对M张环境探测地图进行融合之前,所述方法还包括:
    确定是否获取到M张环境探测地图。
  14. 如权利要求12或13所述的方法,其特征在于,所述方法还包括:
    从所述移动机器人获取当前物体分布信息,所述当前物体分布信息是所述移动机器人在预设间隔内或者预设距离内探测到的物体分布信息,所述预设间隔是所述移动机器人到达当前探测点之前的一段时间间隔,所述预设距离是所述移动机器人到达所述当前探测点之前移动的一段距离;
    根据所述当前物体分布信息确定当前环境探测地图;
    在所述当前环境探测地图与所述当前保存的环境布局地图的一致度小于第二阈值的情况下,确定所述移动机器人在对周围环境进行探测时发生异常;
    向所述移动机器人发送异常恢复指令,所述异常恢复指令用于指示所述移动机器人进行异常恢复。
  15. 如权利要求14所述的方法,其特征在于,向所述移动机器人发送异常恢复指令,包括:
    向所述移动机器人发送回退指令,所述回退指令用于指示所述移动机器人从所述当前探测点回退到第一探测点,所述第一探测点与所述当前探测点之间的距离为预设距离;
    向所述移动机器人发送重新探测指令,所述重新探测指令用于指示所述移动机器人从所述第一探测点开始重新对周围环境进行探测,以重新获取物体分布信息。
  16. 如权利要求14或15所述的方法,其特征在于,所述方法还包括:
    向所述移动机器人发送清除指令,所述清除指令用于清除所述当前物体分布信息。
  17. 一种更新移动机器人工作地图的方法,其特征在于,包括:
    在运动过程中对周围环境进行探测,得到环境布局信息;
    根据一次探测周期探测得到的环境布局信息确定一张环境探测地图;
    在获取到M张环境探测地图的情况下,对所述M张环境探测地图进行融合,以获得当前融合得到的环境融合地图;
    对所述当前融合得到的环境融合地图的像素值和当前保存的环境布局地图的像素值进行加权处理,得到更新后的环境布局地图,其中,所述当前保存的环境布局地图是对上次融合得到的环境融合地图和上次保存的环境布局地图进行加权处理得到的。
  18. 如权利要求17所述的方法,其特征在于,在对M张环境探测地图进行融合之前,所述方法还包括:
    确定是否获取到所述M张环境探测地图。
  19. 如权利要求17或18所述的方法,其特征在于,所述方法还包括:
    根据当前物体分布信息确定当前环境探测地图,所述当前物体分布信息是所述移动机器人在预设间隔内或者预设距离内探测到的物体分布信息,所述预设间隔是所述移动机器人到达当前探测点之前的一段时间间隔,所述预设距离是所述移动机器人到达所述当前探测点之前移动的一段距离;
    在所述当前环境探测地图与所述当前保存的环境布局地图的一致度小于第二阈值的情况下,确定所述移动机器人在对周围环境进行探测时发生异常;
    进行异常恢复。
  20. 如权利要求19所述的方法,其特征在于,所述进行异常恢复,包括:
    从所述当前探测点回退到第一探测点,所述第一探测点与所述当前探测点之间的距离为预设距离;
    从所述第一探测点开始重新对周围环境进行探测,以重新获取物体分布信息。
  21. 如权利要求19或20所述的方法,其特征在于,所述方法还包括:
    清除所述当前物体分布信息。
  22. 一种更新移动机器人工作地图的装置,其特征在于,所述装置包括获取单元和处理单元,所述获取单元和处理单元用于执行如权利要求1-11中任一项所述的方法。
  23. 一种更新移动机器人工作地图的装置,其特征在于,所述装置包括获取单元、处理单元和发送单元,所述获取单元、处理单元和发送单元用于执行如权利要求12-16中任一项所述的方法。
  24. 一种移动机器人,其特征在于,所述移动机器人包括探测模块和处理模块,所述探测模块和处理模块用于执行如权利要求17-21中任一项所述的方法。
  25. 一种更新移动机器人工作地图的装置,其特征在于,包括:
    存储器,用于存储程序;
    处理器,用于执行所述存储器存储的程序,当所述存储器存储的程序被执行时,所述处理器用于执行如权利要求1-11中任一项所述的方法。
  26. 一种更新移动机器人工作地图的装置,其特征在于,包括:
    收发器;
    存储器,用于存储程序;
    处理器,用于执行所述存储器存储的程序,当所述存储器存储的程序被执行时,所述收发器和所述处理器用于执行如权利要求12-16中任一项所述的方法。
  27. 一种移动机器人,其特征在于,包括:
    存储器,用于存储程序;
    处理器,用于执行所述存储器存储的程序,当所述存储器存储的程序被执行时,所述处理器用于执行如权利要求17-21中任一项所述的方法。
  28. 一种计算机存储介质,其特征在于,所述计算机存储介质存储有程序代码,所述程序代码包括用于执行如权利要求1-11或者12-16或者17-21中任一项所述的方法中的步骤的指令。
PCT/CN2020/085231 2019-07-02 2020-04-17 更新移动机器人工作地图的方法、装置及存储介质 WO2021000630A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20835590.9A EP3974778B1 (en) 2019-07-02 2020-04-17 Method and apparatus for updating working map of mobile robot, and storage medium
US17/565,640 US11896175B2 (en) 2019-07-02 2021-12-30 Method and apparatus for updating working map of mobile robot, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910588829.0 2019-07-02
CN201910588829.0A CN112179361B (zh) 2019-07-02 2019-07-02 更新移动机器人工作地图的方法、装置及存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/565,640 Continuation US11896175B2 (en) 2019-07-02 2021-12-30 Method and apparatus for updating working map of mobile robot, and storage medium

Publications (1)

Publication Number Publication Date
WO2021000630A1 true WO2021000630A1 (zh) 2021-01-07

Family

ID=73914947

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/085231 WO2021000630A1 (zh) 2019-07-02 2020-04-17 更新移动机器人工作地图的方法、装置及存储介质

Country Status (4)

Country Link
US (1) US11896175B2 (zh)
EP (1) EP3974778B1 (zh)
CN (1) CN112179361B (zh)
WO (1) WO2021000630A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113391318A (zh) * 2021-06-10 2021-09-14 上海大学 一种移动机器人定位方法及系统

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3739361A1 (en) * 2019-05-13 2020-11-18 Aptiv Technologies Limited Method and system for fusing occupancy maps
CN113238557B (zh) * 2021-05-17 2024-05-07 珠海一微半导体股份有限公司 一种建图异常的识别及恢复方法、计算机可读存储介质和移动机器人
US11640166B2 (en) * 2021-06-29 2023-05-02 Nanning Fulian Fugui Precision Industrial Co., Ltd. Method, mobile device and cleaning robot for specifying cleaning areas
CN113455965B (zh) * 2021-06-30 2022-05-27 广州科语机器人有限公司 清洁机器人控制方法、装置、介质和清洁机器人
CN117243529A (zh) * 2022-06-09 2023-12-19 速感科技(北京)有限公司 拖地机器人及其喷水控制方法和装置以及可读存储介质
CN114895691B (zh) * 2022-07-13 2022-12-02 深之蓝(天津)水下智能科技有限公司 泳池清洁机器人的路径规划方法和装置
DE102022214212B3 (de) 2022-12-21 2024-05-29 BSH Hausgeräte GmbH Kartografieren einer Bodenfläche

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101975951A (zh) * 2010-06-09 2011-02-16 北京理工大学 一种融合距离和图像信息的野外环境障碍检测方法
CN103424112A (zh) * 2013-07-29 2013-12-04 南京航空航天大学 一种基于激光平面辅助的运动载体视觉导航方法
CN103645480A (zh) * 2013-12-04 2014-03-19 北京理工大学 基于激光雷达和图像数据融合的地形地貌特征构建方法
CN108446710A (zh) * 2018-01-31 2018-08-24 高睿鹏 室内平面图快速重建方法及重建系统
US20180322646A1 (en) * 2016-01-05 2018-11-08 California Institute Of Technology Gaussian mixture models for temporal depth fusion
CN109764869A (zh) * 2019-01-16 2019-05-17 中国矿业大学 一种双目相机和惯导融合的自主巡检机器人定位与三维地图构建方法

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7689321B2 (en) * 2004-02-13 2010-03-30 Evolution Robotics, Inc. Robust sensor fusion for mapping and localization in a simultaneous localization and mapping (SLAM) system
DK2952993T3 (en) * 2014-06-05 2018-07-30 Softbank Robotics Europe PROCEDURE FOR MAKING A CARD OF LIKELIHOOD FOR ONE OF THE ABSENCE OR EXISTENCE OF BARRIERS FOR AN AUTONOMOUS ROBOT
CN105652864A (zh) * 2014-11-14 2016-06-08 科沃斯机器人有限公司 自移动机器人构建地图的方法及利用该地图的作业方法
US9630619B1 (en) * 2015-11-04 2017-04-25 Zoox, Inc. Robotic vehicle active safety systems and methods
KR101868374B1 (ko) * 2016-10-20 2018-06-18 엘지전자 주식회사 이동 로봇의 제어방법
US10197413B2 (en) * 2016-11-26 2019-02-05 Thinkware Corporation Image processing apparatus, image processing method, computer program and computer readable recording medium
US10293485B2 (en) * 2017-03-30 2019-05-21 Brain Corporation Systems and methods for robotic path planning
US9939814B1 (en) * 2017-05-01 2018-04-10 Savioke, Inc. Computer system and method for automated mapping by robots
CN107390681B (zh) * 2017-06-21 2019-08-20 华南理工大学 一种基于激光雷达与地图匹配的移动机器人实时定位方法
CN108759844B (zh) * 2018-06-07 2021-11-16 科沃斯商用机器人有限公司 机器人重定位与环境地图构建方法、机器人及存储介质
CN109192054B (zh) * 2018-07-27 2020-04-28 阿里巴巴集团控股有限公司 一种地图区域合并的数据处理方法及装置

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101975951A (zh) * 2010-06-09 2011-02-16 北京理工大学 一种融合距离和图像信息的野外环境障碍检测方法
CN103424112A (zh) * 2013-07-29 2013-12-04 南京航空航天大学 一种基于激光平面辅助的运动载体视觉导航方法
CN103645480A (zh) * 2013-12-04 2014-03-19 北京理工大学 基于激光雷达和图像数据融合的地形地貌特征构建方法
US20180322646A1 (en) * 2016-01-05 2018-11-08 California Institute Of Technology Gaussian mixture models for temporal depth fusion
CN108446710A (zh) * 2018-01-31 2018-08-24 高睿鹏 室内平面图快速重建方法及重建系统
CN109764869A (zh) * 2019-01-16 2019-05-17 中国矿业大学 一种双目相机和惯导融合的自主巡检机器人定位与三维地图构建方法

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113391318A (zh) * 2021-06-10 2021-09-14 上海大学 一种移动机器人定位方法及系统

Also Published As

Publication number Publication date
EP3974778A4 (en) 2022-08-24
EP3974778A1 (en) 2022-03-30
US11896175B2 (en) 2024-02-13
CN112179361A (zh) 2021-01-05
CN112179361B (zh) 2022-12-06
US20220117456A1 (en) 2022-04-21
EP3974778B1 (en) 2023-10-25

Similar Documents

Publication Publication Date Title
WO2021000630A1 (zh) 更新移动机器人工作地图的方法、装置及存储介质
US11747823B2 (en) Monocular modes for autonomous platform guidance systems with auxiliary sensors
JP6879891B2 (ja) 平面セグメントを用いて点群を完成させる方法およびシステム
CN112650255B (zh) 基于视觉与激光雷达信息融合的机器人定位导航方法
US20220261002A1 (en) Autonomous Platform Guidance Systems with Task Planning and Obstacle Avoidance
WO2020223974A1 (zh) 更新地图的方法及移动机器人
CN110587597B (zh) 一种基于激光雷达的slam闭环检测方法及检测系统
US9111351B2 (en) Minimizing drift using depth camera images
US20210183100A1 (en) Data processing method and apparatus
CN111814752A (zh) 室内定位实现方法、服务器、智能移动设备、存储介质
CN111805535A (zh) 一种定位导航方法、装置以及计算机存储介质
WO2022222490A1 (zh) 一种机器人的控制方法及机器人
WO2021081774A1 (zh) 一种参数优化方法、装置及控制设备、飞行器
CN113378605B (zh) 多源信息融合方法及装置、电子设备和存储介质
CN115578433A (zh) 图像处理方法、装置、电子设备及存储介质
JP7351892B2 (ja) 障害物検出方法、電子機器、路側機器、及びクラウド制御プラットフォーム
CN113587928B (zh) 导航方法、装置、电子设备、存储介质及计算机程序产品
WO2022222345A1 (zh) 移动机器人的定位修正方法和装置、存储介质、电子装置
CN114299192B (zh) 定位建图的方法、装置、设备和介质
CN113379850B (zh) 移动机器人控制方法、装置、移动机器人及存储介质
CN109816726A (zh) 一种基于深度滤波器的视觉里程计地图更新方法和系统
CN115578432A (zh) 图像处理方法、装置、电子设备及存储介质
CN115294234B (zh) 图像的生成方法、装置、电子设备和存储介质
CN116576866B (zh) 导航方法和设备
WO2022172831A1 (ja) 情報処理装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20835590

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020835590

Country of ref document: EP

Effective date: 20211223