US20210183116A1 - Map building method, computer-readable storage medium and robot - Google Patents


Info

Publication number
US20210183116A1
US20210183116A1
Authority
US
United States
Prior art keywords
pixels
map
thinned
boundary
preprocessed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US17/075,727
Other versions
US11593974B2 (en)
Inventor
Rui Guo
Zhichao Liu
Jianxin Pang
Youjun Xiong
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ubtech Robotics Corp
Original Assignee
Ubtech Robotics Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ubtech Robotics Corp filed Critical Ubtech Robotics Corp
Assigned to UBTECH ROBOTICS CORP LTD. Assignors: GUO, Rui; LIU, Zhichao; PANG, Jianxin; XIONG, Youjun
Publication of US20210183116A1
Application granted
Publication of US11593974B2
Legal status: Active
Adjusted expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/001 Texturing; Colouring; Generation of texture or colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05 Geographic models
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C 21/3804 Creation or updating of map data
    • G01C 21/3807 Creation or updating of map data characterised by the type of data
    • G01C 21/383 Indoor data
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C 21/3804 Creation or updating of map data
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C 21/3804 Creation or updating of map data
    • G01C 21/3833 Creation or updating of map data characterised by the source of data
    • G01C 21/3837 Data obtained from a single source
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/0031 Geometric image transformation in the plane of the image for topological mapping of a higher dimensional structure on a lower dimensional surface
    • G06T 3/06
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/20 Image enhancement or restoration by the use of local operators
    • G06T 5/30 Erosion or dilatation, e.g. thinning
    • G06T 5/70
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20172 Image enhancement details
    • G06T 2207/20192 Edge enhancement; Edge preservation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle

Definitions

  • the present disclosure generally relates to robots, and particularly to a map building method, a computer-readable storage medium, and a robot.
  • FIG. 1 is a schematic block diagram of a robot according to one embodiment.
  • FIG. 2 is a flowchart of a map building method according to one embodiment.
  • FIG. 3 is an exemplary schematic diagram of an original grayscale map.
  • FIG. 4 is a flowchart of boundary filling according to one embodiment.
  • FIG. 5 is an exemplary schematic diagram of a refined map without ghosting obtained by processing the original grayscale map of FIG. 3 .
  • FIG. 6 is a schematic block diagram of a map building device according to one embodiment.
  • FIG. 1 is a schematic block diagram of an autonomous robot according to one embodiment.
  • the robot 6 includes a processor 60 , a storage 61 , one or more computer programs 62 stored in the storage 61 and executable by the processor 60 .
  • when the processor 60 executes the computer programs 62, the steps in the embodiments of the method for controlling the robot 6, such as steps S 101 through S 106 in FIG. 2 and steps S 1041 to S 1043 in FIG. 4, and the functions of the modules/units in the embodiments, such as units 501 through 506 in FIG. 6, are implemented.
  • the one or more computer programs 62 may be divided into one or more modules/units, and the one or more modules/units are stored in the storage 61 and executed by the processor 60 .
  • the one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the one or more computer programs 62 in the robot 6 .
  • the one or more computer programs 62 may be divided into a map acquiring module, a preprocessing module, a binarization processing module, a boundary filling module, a binarization thinning module, and a preprocess thinning module, which will be described in detail below.
  • FIG. 1 is merely an example of the robot 6 , and does not limit the robot 6 .
  • the robot 6 may include components different in numbers from those illustrated, or incorporate some other different components.
  • the robot 6 may further include an input and output device, a network access device, a bus, and the like.
  • the processor 60 may be a central processing unit (CPU), a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a programmable logic device, a discrete gate, a transistor logic device, or a discrete hardware component.
  • the general purpose processor may be a microprocessor or any conventional processor or the like.
  • the storage 61 may be an internal storage unit of the robot 6 , such as a hard disk or a memory.
  • the storage 61 may also be an external storage device of the robot 6 , such as a plug-in hard disk, a smart memory card (SMC), and a secure digital (SD) card, or any suitable flash cards.
  • the storage 61 may also include both an internal storage unit and an external storage device.
  • the storage 61 is used to store computer programs, other programs, and data required by the robot.
  • the storage 61 can also be used to temporarily store data that have been output or is about to be output.
  • a map building method includes the following steps.
  • Step S 101 Acquire an original grayscale map.
  • the original grayscale map is a grid map of surrounding environment generated by the robot 6 having one or more laser sensors.
  • FIG. 3 shows an example of the original grayscale map.
  • the black lines represent boundaries such as walls, objects, and the like
  • the white area represents the area where the robot can move
  • the gray areas represent unknown areas. Due to uneven floors, poor sensor performance, and the robot traveling back and forth along the same path many times, the boundaries of the original grayscale map may exhibit ghosting and/or boundary interference problems.
  • Step S 102 Preprocess the original grayscale map to obtain a preprocessed map.
  • the pixel value of each pixel in the original grayscale map is generally in the range from 0 to 255.
  • the original grayscale map can be preprocessed.
  • the pixel value of each pixel of the original grayscale map is set to be 0, 205, or 255, which represent black, gray, and white, respectively.
  • the original grayscale map can be preprocessed according to the following rule: if the pixel value of a pixel in the original grayscale map is less than minPixel, then the pixel value of the pixel at the same position in the preprocessed map is 0.
  • minPixel and maxPixel represent preset pixel thresholds, and their specific values can be set according to actual conditions.
  • minPixel is set to 50 and maxPixel is set to 240.
  • if the pixel value of a pixel in the original grayscale map is greater than or equal to minPixel and less than maxPixel, then the pixel value of the pixel at the same position in the preprocessed map is 205.
  • if the pixel value of a pixel in the original grayscale map is greater than or equal to maxPixel, then the pixel value of the pixel at the same position in the preprocessed map is 255.
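The three-level quantization described above can be sketched as follows. This is a minimal illustration; the function name and the use of NumPy are assumptions for the example, not part of the disclosed method.

```python
import numpy as np

def preprocess(gray, min_pixel=50, max_pixel=240):
    """Quantize an 8-bit grayscale grid map into three levels:
    0 (black, boundary), 205 (gray, unknown), 255 (white, free)."""
    out = np.empty_like(gray)
    out[gray < min_pixel] = 0                            # boundary pixels
    out[(gray >= min_pixel) & (gray < max_pixel)] = 205  # unknown area
    out[gray >= max_pixel] = 255                         # free area
    return out
```

With the thresholds given in the embodiment (minPixel = 50, maxPixel = 240), input values 0, 49, 50, 239, 240, and 255 map to 0, 0, 205, 205, 255, and 255 respectively.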
  • Step S 103 Binarize the preprocessed map to obtain a binarized map. Specifically, the pixel value of each pixel of the preprocessed map is set to 0 or 255, which represent black and white respectively, thereby obtaining the binarized map. During binarization, the preprocessed map needs to be traversed pixel by pixel. If a pixel value is 205 or 255, which represents gray or white, then the pixel value of the pixel at the same position in the binarized map is set to 255, which represents white.
  • if a pixel value is 0, which represents black, it is then determined whether there is an effective structure, in which black pixels are connected to one another or gray and/or black pixels are connected to one another, in a 3×3 grid with the pixel as the center. If so, an object or wall exists there, and the pixel value at the corresponding position of the binarized map remains 0. Otherwise, the pixel is likely isolated sensor noise that needs to be filtered out, so the pixel value at the corresponding position of the binarized map is set to 255.
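The binarization with the 3×3 isolated-noise filter can be sketched as follows (an illustrative sketch; the function name and NumPy usage are assumptions):

```python
import numpy as np

def binarize(pre):
    """Binarize a three-level (0/205/255) preprocessed map.
    A black pixel (0) is kept only if its 3x3 neighborhood contains
    another black or a gray pixel; otherwise it is treated as noise."""
    h, w = pre.shape
    out = np.full_like(pre, 255)       # gray and white both become white
    for y in range(h):
        for x in range(w):
            if pre[y, x] != 0:
                continue
            y0, y1 = max(0, y - 1), min(h, y + 2)
            x0, x1 = max(0, x - 1), min(w, x + 2)
            nb = pre[y0:y1, x0:x1]
            # black (0) neighbors excluding the center, plus gray (205) ones
            connected = np.count_nonzero(nb == 0) - 1 + np.count_nonzero(nb == 205)
            out[y, x] = 0 if connected > 0 else 255
    return out
```

An isolated black pixel surrounded entirely by white is filtered to white, while black pixels that touch another black or gray pixel remain black.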
  • Step S 104 Perform a boundary filling to the preprocessed map and the binarized map to obtain a boundary-filled preprocessed map and a boundary-filled binarized map.
  • a black boundary of the binarized map is determined.
  • a region growth algorithm can be used to extract and save all black boundaries in the binary map.
  • the region growth algorithm can be implemented either based on breadth-first traversal or depth-first traversal.
  • a boundary filling is performed to the preprocessed map and the binarized map, with each black pixel within the black boundary of the binarized map being a center, so as to obtain the boundary-filled preprocessed map and the boundary-filled binarized map.
  • Step S 1041 Search for white pixels in eight-neighbor pixels of each of three-value target pixels.
  • the three-value target pixels are pixels at same positions as binary target pixels in the preprocessed map, and the binary target pixels are pixels in the black boundary of the binarized map. If a white pixel is found, the procedure goes to step S 1042.
  • Step S 1042 Record each pixel scanned on a ghost detection line until a gray pixel is scanned, if a white pixel is found.
  • the ghost detection line is a ray directed from the found white pixel to a corresponding three-value target pixel.
  • a threshold of the maximum number of connected pixels that can be scanned in the direction of the ghost detection line can be preset. This threshold is denoted as maxDetectedPixel, and its value can be set according to actual conditions. In the embodiment, maxDetectedPixel is set to 5. Generally, in engineering practice, one pixel in a rasterized map is equal to 5 centimeters in real life.
  • the threshold is set to 5 because an effective ghost detection line that can be filled must have connected black and gray pixels as its two endpoints, so a fillable ghosting area has a maximum width of three consecutive white pixels. That is, ghosting areas no wider than 15 centimeters can be processed.
  • if the number of recorded pixels is less than a preset value (i.e., maxDetectedPixel) and the last recorded pixel is a black pixel, the procedure goes to step S 1043.
  • Step S 1043 Perform a filling to pixels to be filled in the binarized map and the preprocessed map, if a number of recorded pixels is less than a preset value and the last recorded pixel is a black pixel.
  • the white pixels in the recorded pixels may be identified as the pixels to be filled in the preprocessed map, and the pixels in the binarized map at the same position as the pixels to be filled in the preprocessed map may be identified as the pixels to be filled in the binarized map. Finally, the pixels to be filled in the binarized map and the pixels to be filled in the preprocessed map are all set as black pixels.
  • Each black pixel in the black boundary of the binarized map is traversed.
  • the process shown in FIG. 4 is performed so as to perform boundary filling on the preprocessed map and the binarized map.
  • the preprocessed map obtained after the boundary filling is the boundary-filled preprocessed map
  • the binarized map obtained after the boundary filling is the boundary-filled binarized map.
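The boundary-filling steps S 1041 through S 1043 can be sketched as follows. This is one plausible reading of the procedure, not the patent's own code: the names are illustrative, and it assumes the scan starts at the three-value target pixel and proceeds in the direction pointing from the found white neighbor toward that target pixel.

```python
import numpy as np

def fill_ghost_lines(pre, bin_map, boundary_pixels, max_detected=5):
    """For each black boundary pixel: from any white eight-neighbor,
    walk a ray through the target pixel, recording pixels until a gray
    pixel is met; if fewer than max_detected pixels were recorded and
    the last one is black, paint the recorded white gap black in both
    the preprocessed map and the binarized map."""
    h, w = pre.shape
    for (by, bx) in boundary_pixels:
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dy == 0 and dx == 0:
                    continue
                ny, nx = by + dy, bx + dx
                if not (0 <= ny < h and 0 <= nx < w) or pre[ny, nx] != 255:
                    continue               # only white neighbors start a line
                sy, sx = -dy, -dx          # ray direction: white -> target
                recorded, y, x = [], by, bx
                while 0 <= y < h and 0 <= x < w and len(recorded) < max_detected:
                    if pre[y, x] == 205:   # stop at the first gray pixel
                        break
                    recorded.append((y, x))
                    y, x = y + sy, x + sx
                if recorded and len(recorded) < max_detected and pre[recorded[-1]] == 0:
                    for (ry, rx) in recorded:
                        if pre[ry, rx] == 255:   # fill only the white gap
                            pre[ry, rx] = 0
                            bin_map[ry, rx] = 0
    return pre, bin_map
```

For example, a one-row strip black-white-black followed by gray has its single-pixel white gap between the two black lines filled black.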
  • Step S 105 Perform a boundary thinning to the boundary-filled binarized map to obtain a thinned binarized map.
  • the map needs a thinning process, in which all thick boundaries formed by multi-pixel aggregation are thinned to a width of one pixel.
  • a binary image edge thinning algorithm may be used to thin the boundary of the boundary-filled binarized map to obtain the thinned binarized map.
  • the binary image edge thinning algorithm includes, but is not limited to, the fast parallel algorithm proposed by Zhang and Suen, the Hilditch, Pavlidis, and Rosenfeld thinning algorithms, and the index table thinning algorithm.
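Among the cited options, the Zhang-Suen fast parallel algorithm is well documented in the literature; a compact sketch is shown below. It is an illustration of that published algorithm, not the patent's own implementation, and it treats black (0) pixels of a 0/255 map as the foreground to be thinned.

```python
import numpy as np

def zhang_suen_thin(bin_map):
    """Thin black (0) boundaries of a 0/255 map to one-pixel width
    using the Zhang-Suen fast parallel thinning algorithm."""
    img = (bin_map == 0).astype(np.uint8)   # 1 = boundary foreground
    h, w = img.shape
    changed = True
    while changed:
        changed = False
        for step in (0, 1):                 # two parallel sub-iterations
            to_clear = []
            for y in range(1, h - 1):
                for x in range(1, w - 1):
                    if img[y, x] != 1:
                        continue
                    # neighbors P2..P9, clockwise from the north
                    p = [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                         img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]
                    if not 2 <= sum(p) <= 6:
                        continue            # condition on B(P1)
                    a = sum(p[i] == 0 and p[(i + 1) % 8] == 1 for i in range(8))
                    if a != 1:
                        continue            # condition on A(P1)
                    if step == 0:
                        if p[0] * p[2] * p[4] or p[2] * p[4] * p[6]:
                            continue        # P2*P4*P6 = 0 and P4*P6*P8 = 0
                    else:
                        if p[0] * p[2] * p[6] or p[0] * p[4] * p[6]:
                            continue        # P2*P4*P8 = 0 and P2*P6*P8 = 0
                    to_clear.append((y, x))
            for (y, x) in to_clear:
                img[y, x] = 0
                changed = True
    out = np.full_like(bin_map, 255)
    out[img == 1] = 0
    return out
```

Applied to a thick bar of black pixels, the routine strips boundary layers in parallel passes until only a one-pixel-wide skeleton remains.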
  • Step S 106 Perform a boundary thinning to the boundary-filled preprocessed map, according to the thinned binarized map, to obtain a thinned preprocessed map.
  • the pixels in the boundary-filled preprocessed map that are at the same positions as the black pixels in the thinned binarized map are set as black pixels.
  • the pixels to be thinned in the boundary-filled preprocessed map are determined, and the pixels to be thinned are black pixels at the same positions as the white pixels in the thinned binarized map.
  • if the number of white pixels in the eight-neighbor pixels of a pixel to be thinned is greater than the number of gray pixels, the pixel to be thinned is set as a white pixel; if the number of white pixels is less than or equal to the number of gray pixels, the pixel to be thinned is set as a gray pixel. After the processing, the thinned preprocessed map is obtained.
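Step S 106, which transfers the thinned boundary back into the three-level preprocessed map, can be sketched as follows (illustrative names; the scan order when counting mutated neighbors is an assumption):

```python
import numpy as np

def thin_preprocessed(pre, thin_bin):
    """Thin the boundary-filled preprocessed map according to the
    thinned binarized map: pixels on the thinned boundary stay black;
    black pixels removed by thinning become white or gray depending on
    the majority of white vs. gray pixels in their eight-neighborhood."""
    out = pre.copy()
    out[thin_bin == 0] = 0          # thinned boundary stays black
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            # a pixel to be thinned: black in pre, white in thin_bin
            if pre[y, x] == 0 and thin_bin[y, x] == 255:
                y0, y1 = max(0, y - 1), min(h, y + 2)
                x0, x1 = max(0, x - 1), min(w, x + 2)
                nb = out[y0:y1, x0:x1]
                white = np.count_nonzero(nb == 255)
                gray = np.count_nonzero(nb == 205)
                out[y, x] = 255 if white > gray else 205
    return out
```

For a three-pixel-wide black column whose thinned binarized map keeps only the middle pixel, the removed black pixels take the gray value when white and gray neighbors are tied, matching the "less than or equal" rule above.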
  • the thinned preprocessed map can be directly used as the final result.
  • although the thinned preprocessed map has solved the problems of boundary interference and boundary ghosting, it may worsen the problem of the black boundary extending into the gray areas and the problem of black objects remaining outside the boundary of black walls.
  • the black boundary of the thinned binarized map can also be extracted (similar to the boundary extraction process in step S 104 ), and a boundary filling is performed to the thinned preprocessed map and the thinned binarized map, with each black pixel within the black boundary of the thinned binarized map being a center, so as to obtain an optimized preprocessed map and the optimized binarized map.
  • the specific process of optimization processing may include the following steps.
  • if the counted number of white pixels is equal to 0, it means that the black boundary is not adjacent to a white area where the robot can move; that is, the black boundary has extended into the gray area. This situation does not comply with the white-black-gray projection pattern of laser scanning and needs to be filtered out.
  • the specific filtering process is as follows: set the thinned three-value target pixels as gray pixels, and set the thinned binary target pixels as white pixels.
  • the scan lines are rays directed from gray pixels in eight-neighbor pixels of each of the thinned three-value target pixels to a corresponding one of the thinned three-value target pixels.
  • the number of scan lines is the same as the number of gray pixels in the eight-neighbor pixels, which means that the ray from any gray pixel in the eight-neighbor pixels to a thinned three-value target pixel is a scan line.
  • the distance is denoted as maxScannedPixel.
  • the value of the maxScannedPixel can be set according to actual situations.
  • the value is based on the ratio of the actual diameter of the robot to the actual distance that a single pixel in the grid map represents. If the actual diameter of the robot is 50 cm and one pixel on the grid map represents 5 cm in real life, maxScannedPixel can be set to 10, meaning that at most 10 pixels are scanned. If all scanning attempts along the gray-to-black straight-line directions of a thinned three-value target pixel are blocked by the black boundary, the pixel is likely to belong to a remaining object boundary outside the black wall boundary. Because the robot cannot fit in the area adjacent to such a pixel, the boundary cannot have been produced by the white-black-gray projection pattern of laser scanning, and these pixels need to be filtered out.
  • the filtering process is as follows: set the thinned three-value target pixels and the white pixels on each scan line as gray pixels, and set the thinned binary target pixels as white pixels. If at least one scan line is not blocked by black pixels within the preset distance, no action is required.
  • Each black pixel in the black boundary of the thinned binarized map is traversed.
  • the above-mentioned optimization process is performed so as to perform optimization on the thinned preprocessed map and the thinned binarized map.
  • the preprocessed map obtained after the optimization process is the optimized preprocessed map
  • the binarized map obtained after the optimization process is the optimized binarized map.
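The optimization pass described above can be sketched as follows. This is one plausible reading of the procedure and every name is illustrative; in particular, a scan line that reaches a gray pixel or the map border before hitting black is assumed to count as "not blocked".

```python
import numpy as np

def optimize_boundary(pre, bin_map, boundary_pixels, max_scanned=10):
    """Filter boundary pixels that cannot result from the laser's
    white-black-gray projection pattern: pixels with no white
    eight-neighbor, and pixels whose every gray-to-target scan line is
    blocked by black within max_scanned pixels."""
    h, w = pre.shape
    for (by, bx) in boundary_pixels:
        nbrs = [(by + dy, bx + dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0) and 0 <= by + dy < h and 0 <= bx + dx < w]
        whites = [p for p in nbrs if pre[p] == 255]
        grays = [(p, (by - p[0], bx - p[1])) for p in nbrs if pre[p] == 205]
        if not whites:
            # boundary extended into the unknown area: filter it out
            pre[by, bx] = 205
            bin_map[by, bx] = 255
            continue
        blocked_all, line_whites = bool(grays), []
        for (_, (sy, sx)) in grays:
            # scan line: ray from the gray neighbor through the target
            y, x, blocked = by + sy, bx + sx, False
            for _ in range(max_scanned):
                if not (0 <= y < h and 0 <= x < w) or pre[y, x] == 205:
                    break                  # ran off the map or hit gray
                if pre[y, x] == 0:
                    blocked = True         # blocked by the black boundary
                    break
                line_whites.append((y, x))
                y, x = y + sy, x + sx
            if not blocked:
                blocked_all = False
                break
        if blocked_all:
            for (ry, rx) in line_whites:
                pre[ry, rx] = 205
            pre[by, bx] = 205
            bin_map[by, bx] = 255
    return pre, bin_map
```

For instance, a black pixel surrounded entirely by gray has no white neighbor, so it is reset to gray in the preprocessed map and to white in the binarized map.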
  • the optimized preprocessed map can be used as the final result, as shown in FIG. 5, which is the final result obtained by processing the original grayscale map shown in FIG. 3. It can be seen from FIG. 5 that the boundary interference and ghosting problems have been effectively eliminated, and a refined map without ghosting has been obtained.
  • the boundary interference and ghosting problems are solved without increasing hardware cost.
  • the boundary interference and ghosting problems can be eliminated, which facilitates wide application.
  • sequence numbers of the foregoing procedures do not indicate an execution sequence.
  • the execution sequence of the procedures should be determined according to functions and internal logic thereof, and should not constitute any limitation to the implementation procedure of the embodiments of the present disclosure.
  • a map building device includes a map acquiring module 501 , a preprocessing module 502 , a binarization processing module 503 , a boundary filling module 504 , a binarization thinning module 505 , and a preprocess thinning module 506 .
  • the map acquiring module 501 is configured to acquire an original grayscale map.
  • the preprocessing module 502 is configured to preprocess the original grayscale map to obtain a preprocessed map.
  • the binarization processing module 503 is configured to binarize the preprocessed map to obtain a binarized map.
  • the boundary filling module 504 is configured to perform a boundary filling to the preprocessed map and the binarized map to obtain a boundary-filled preprocessed map and a boundary-filled binarized map.
  • the binarization thinning module 505 is configured to perform a boundary thinning to the boundary-filled binarized map to obtain a thinned binarized map.
  • the preprocess thinning module 506 is configured to perform a boundary thinning to the boundary-filled preprocessed map, according to the thinned binarized map, to obtain a thinned preprocessed map.
  • the boundary filling module 504 may include a boundary extraction submodule and a boundary filling submodule.
  • the boundary extraction submodule is configured to determine a black boundary of the binarized map.
  • the boundary filling submodule is configured to perform a boundary filling to the preprocessed map and the binarized map, with each black pixel within the black boundary of the binarized map being a center, so as to obtain the boundary-filled preprocessed map and the boundary-filled binarized map.
  • the boundary filling submodule may include a pixel searching unit, a pixel recording unit, and a pixel filling unit.
  • the pixel searching unit is configured to search for white pixels in eight-neighbor pixels of each of three-value target pixels.
  • the three-value target pixels are pixels at same positions as binary target pixels in the preprocessed map, and the binary target pixels are pixels in the black boundary of the binarized map.
  • the pixel recording unit is configured to record each pixel scanned on a ghost detection line until a gray pixel is scanned, if a white pixel is found.
  • the ghost detection line is a ray directed from the found white pixel to a corresponding three-value target pixel.
  • the pixel filling unit is configured to perform a filling to pixels to be filled in the binarized map and the preprocessed map, if a number of recorded pixels is less than a preset value and a last recorded pixel is a black pixel.
  • the pixel filling unit may include a first identifying submodule, a second identifying submodule, and a pixel filling submodule.
  • the first identifying submodule is configured to identify white pixels in the recorded pixels as pixels to be filled in the preprocessed map.
  • the second identifying submodule is configured to identify pixels in the binarized map at same positions as pixels to be filled in the preprocessed map as pixels to be filled in the binarized map.
  • the pixel filling submodule is configured to set the pixels to be filled in the binarized map and the pixels to be filled in the preprocessed map as black pixels.
  • the preprocess thinning module 506 may include a first setting submodule, a thinning pixel determining submodule, a second setting submodule, and a third setting submodule.
  • the first setting submodule is configured to set the pixels in the boundary-filled preprocessed map at same positions as the black pixels in the thinned binarized map as black pixels.
  • the thinning pixel determining submodule is configured to determining pixels to be thinned in the boundary-filled preprocessed map.
  • the pixels to be thinned are black pixels at same positions as white pixels in the thinned binarized map.
  • the second setting submodule is configured to set the pixels to be thinned as white pixels, if a number of white pixels in eight-neighbor pixels of the pixels to be thinned is greater than a number of gray pixels.
  • the third setting submodule is configured to set the pixels to be thinned as gray pixels, if the number of white pixels in eight-neighbor pixels of the pixels to be thinned is less than or equal to the number of gray pixels.
  • the map building device may further include a boundary extraction module and an optimization module.
  • the boundary extraction module is configured to determine a black boundary of the thinned binarized map.
  • the optimization module is configured to perform an optimization to the thinned preprocessed map and the thinned binarized map, with each black pixel within the black boundary of the thinned binarized map being a center, so as to obtain an optimized preprocessed map and an optimized binarized map.
  • the optimization module may include a counting submodule, a first optimization submodule, and a second optimization submodule.
  • the counting submodule is configured to count a number of white pixels in the eight-neighbor pixels of the thinned three-value target pixels.
  • the thinned three-value target pixels are pixels at same positions as thinned binary target pixels in the thinned preprocessed map, and the thinned binary target pixels are pixels in the black boundary of the thinned binarized map.
  • the first optimization submodule is configured to set the thinned three-value target pixels as gray pixels, and set the thinned binary target pixels as white pixels, if the counted number of the white pixels is 0.
  • the second optimization submodule is configured to set the thinned three-value target pixels and the white pixels on each of the scan lines as gray pixels, and set the thinned binary target pixels as white pixels, if the counted number of the white pixels is greater than 0 and each of the scan lines is blocked by black pixels within a preset distance.
  • the scan lines are rays directed from gray pixels in eight-neighbor pixels of each of the thinned three-value target pixels to a corresponding one of the thinned three-value target pixels.
  • the division of the above-mentioned functional units and modules is merely an example for illustration.
  • the above-mentioned functions may be allocated to be performed by different functional units according to requirements, that is, the internal structure of the device may be divided into different functional units or modules to complete all or part of the above-mentioned functions.
  • the functional units and modules in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
  • the above-mentioned integrated unit may be implemented in the form of hardware or in the form of software functional unit.
  • each functional unit and module is merely for the convenience of distinguishing each other and are not intended to limit the scope of protection of the present disclosure.
  • for the specific operation process of the units and modules in the above-mentioned system, reference may be made to the corresponding processes in the above-mentioned method embodiments, which are not repeated herein.
  • the disclosed apparatus (device)/terminal device and method may be implemented in other manners.
  • the above-mentioned apparatus (device)/terminal device embodiment is merely exemplary.
  • the division of modules or units is merely a logical functional division, and other division manners may be used in actual implementations; that is, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the shown or discussed mutual coupling may be direct coupling or communication connection, and may also be indirect coupling or communication connection through some interfaces, devices or units, and may also be electrical, mechanical or other forms.
  • the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual requirements to achieve the objectives of the solutions of the embodiments.
  • the functional units and modules in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
  • the above-mentioned integrated unit may be implemented in the form of hardware or in the form of software functional unit.
  • when the integrated module/unit is implemented in the form of a software functional unit and is sold or used as an independent product, the integrated module/unit may be stored in a non-transitory computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above-mentioned embodiments of the present disclosure may also be implemented by instructing relevant hardware through a computer program.
  • the computer program may be stored in a non-transitory computer-readable storage medium, which may implement the steps of each of the above-mentioned method embodiments when executed by a processor.
  • the computer program includes computer program codes, which may be in the form of source codes, object codes, executable files, certain intermediate forms, and the like.
  • the computer-readable medium may include any entity or device capable of carrying the computer program codes, a recording medium, a USB flash drive, a portable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), electric carrier signals, telecommunication signals, software distribution media, and the like.
  • the content included in the computer-readable medium can be appropriately increased or decreased according to the requirements of legislation and patent practice in the relevant jurisdiction. For example, in some jurisdictions, according to the legislation and patent practice, the computer-readable medium does not include electric carrier signals and telecommunication signals.

Abstract

A method for building a map includes: acquiring an original grayscale map, preprocessing the original grayscale map to obtain a preprocessed map, binarizing the preprocessed map to obtain a binarized map, performing a boundary filling to the preprocessed map and the binarized map to obtain a boundary-filled preprocessed map and a boundary-filled binarized map, performing a boundary thinning to the boundary-filled binarized map to obtain a thinned binarized map, and performing a boundary thinning to the boundary-filled preprocessed map, according to the thinned binarized map, to obtain a thinned preprocessed map.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application claims priority to Chinese Patent Application No. 201911266982.8, filed Dec. 11, 2019, which is hereby incorporated by reference herein as if set forth in its entirety.
  • BACKGROUND
  • 1. Technical Field
  • The present disclosure generally relates to robots, and particularly to a map building method, a computer-readable storage medium, and a robot.
  • 2. Description of Related Art
  • When autonomous robots build maps of the surrounding environment, due to uneven roads, poor sensor performance indicators, and robots running back and forth on the same path many times, the boundaries of the constructed grid map may have a ghost image problem. In order to solve the problem, one way is to improve the performance indicators of specific hardware devices, such as using advanced laser sensors. Another way is to use multi-sensor fusion technology, such as adding camera devices and introducing visual loop-closure optimization. Although these methods can theoretically eliminate the ghost image problem, they bring extremely high hardware costs and are difficult to apply widely in actual scenarios.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Many aspects of the present embodiments can be better understood with reference to the following drawings. The components in the drawings are not necessarily drawn to scale, the emphasis instead being placed upon clearly illustrating the principles of the present embodiments. Moreover, in the drawings, all the views are schematic, and like reference numerals designate corresponding parts throughout the several views.
  • FIG. 1 is a schematic block diagram of a robot according to one embodiment.
  • FIG. 2 is a flowchart of a map building method according to one embodiment.
  • FIG. 3 is an exemplary schematic diagram of an original grayscale map.
  • FIG. 4 is a flowchart of boundary filling according to one embodiment.
  • FIG. 5 is an exemplary schematic diagram of a refined map without ghosting obtained by processing the original grayscale map of FIG. 3.
  • FIG. 6 is a schematic block diagram of a map building device according to one embodiment.
  • DETAILED DESCRIPTION
  • The disclosure is illustrated by way of example and not by way of limitation in the figures of the accompanying drawings, in which like reference numerals indicate similar elements. It should be noted that references to “an” or “one” embodiment in this disclosure are not necessarily to the same embodiment, and such references can mean “at least one” embodiment.
  • The terms “upper”, “lower”, “left” and “right”, indicating the orientational or positional relationship based on the orientational or positional relationship shown in the drawings, are merely for convenience of description, but are not intended to indicate or imply that the device or elements must have a particular orientation or be constructed and operated in a particular orientation, and therefore should not be construed as limiting the present disclosure. The terms “first” and “second” are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features. The meaning of “multiple” is two or more, unless expressly stated otherwise.
  • FIG. 1 is a schematic block diagram of an autonomous robot according to one embodiment. The robot 6 includes a processor 60, a storage 61, and one or more computer programs 62 stored in the storage 61 and executable by the processor 60. When the processor 60 executes the computer programs 62, the steps in the embodiments of the map building method, such as steps S101 through S106 in FIG. 2 and steps S1041 to S1043 in FIG. 4, and the functions of the modules/units in the embodiments, such as units 501 through 506 in FIG. 6, are implemented.
  • Exemplarily, the one or more computer programs 62 may be divided into one or more modules/units, and the one or more modules/units are stored in the storage 61 and executed by the processor 60. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the one or more computer programs 62 in the robot 6. For example, the one or more computer programs 62 may be divided into a map acquiring module, a preprocessing module, a binarization processing module, a boundary filling module, a binarization thinning module, and a preprocess thinning module, which will be described in detail below.
  • It should be noted that FIG. 1 is merely an example of the robot 6, and does not limit the robot 6. The robot 6 may include components different in number from those illustrated, or incorporate some other different components. For example, the robot 6 may further include an input and output device, a network access device, a bus, and the like.
  • The processor 60 may be a central processing unit (CPU), a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a programmable logic device, a discrete gate, a transistor logic device, or a discrete hardware component. The general purpose processor may be a microprocessor or any conventional processor or the like.
  • The storage 61 may be an internal storage unit of the robot 6, such as a hard disk or a memory. The storage 61 may also be an external storage device of the robot 6, such as a plug-in hard disk, a smart memory card (SMC), and a secure digital (SD) card, or any suitable flash cards. Furthermore, the storage 61 may also include both an internal storage unit and an external storage device. The storage 61 is used to store computer programs, other programs, and data required by the robot. The storage 61 can also be used to temporarily store data that have been output or is about to be output.
  • Referring to FIG. 2, in one embodiment, a map building method includes the following steps.
  • Step S101: Acquire an original grayscale map. The original grayscale map is a grid map of the surrounding environment generated by the robot 6 having one or more laser sensors. FIG. 3 shows an example of the original grayscale map. In the grayscale map, the black lines represent boundaries such as walls and objects, the white area represents the area where the robot can move, and the gray areas represent unknown areas. Due to uneven floors, poor sensor performance indicators, and the robot running back and forth on the same path many times, the boundaries of the original grayscale map may have a ghost image problem and/or a boundary interference problem.
  • Step S102: Preprocess the original grayscale map to obtain a preprocessed map. The pixel value of each pixel in the original grayscale map is generally in the range from 0 to 255. To facilitate processing, the original grayscale map can be preprocessed. Specifically, the pixel value of each pixel of the original grayscale map is set to 0, 205, or 255, which represent black, gray, and white, respectively. In one embodiment, the original grayscale map can be preprocessed according to the following formula:
  • $$I_1(x,y)=\begin{cases}0, & 0 \le I_0(x,y) < \text{minPixel}\\ 205, & \text{minPixel} \le I_0(x,y) < \text{maxPixel}\\ 255, & \text{maxPixel} \le I_0(x,y) \le 255\end{cases}$$
  • where I0(x,y) and I1(x,y) represent pixel value of any pixel in the original grayscale map and the preprocessed map, respectively, minPixel and maxPixel represent preset pixel thresholds, and their specific values can be set according to actual conditions. In the embodiment, minPixel is set to 50 and maxPixel is set to 240. When the pixel value of a pixel in the original grayscale map is less than minPixel, then the pixel value of the pixel at the same position as the pixel in the preprocessed map is 0. When the pixel value of a pixel in the original grayscale map is greater than or equal to minPixel and less than maxPixel, then the pixel value of the pixel at the same position as the pixel in the preprocessed map is 205. When the pixel value of a pixel in the original grayscale map is greater than or equal to maxPixel, then the pixel value of the pixel at the same position as the pixel in the preprocessed map is 255.
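The piecewise mapping above is straightforward to express with NumPy. The following sketch is illustrative (the function name and defaults are assumptions, with minPixel = 50 and maxPixel = 240 as in the embodiment):

```python
import numpy as np

def preprocess(gray, min_pixel=50, max_pixel=240):
    """Quantize a grayscale grid map to the three levels 0 (black),
    205 (gray), and 255 (white), following the piecewise formula."""
    out = np.empty_like(gray)
    out[gray < min_pixel] = 0                            # boundary / occupied
    out[(gray >= min_pixel) & (gray < max_pixel)] = 205  # unknown area
    out[gray >= max_pixel] = 255                         # free space
    return out
```

For example, `preprocess(np.array([[10, 100, 250]], dtype=np.uint8))` yields `[[0, 205, 255]]`.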
  • Step S103: Binarize the preprocessed map to obtain a binarized map. Specifically, the pixel value of each pixel of the preprocessed map is set to 0 or 255, which represent black and white respectively, thereby obtaining the binarized map. During binarization, the preprocessed map needs to be traversed pixel by pixel. If a pixel value is 205 or 255, which represents gray or white, then the pixel value of the pixel at the same position in the binarized map is set to 255, which represents white. If a pixel value is 0, which represents black, it is then determined whether there is an effective structure in which black pixels are connected to one another, or gray and/or black pixels are connected to one another, in a 3×3 grid with the pixel as the center. If so, an object or wall exists, and the pixel value at the corresponding position of the binarized map remains 0. Otherwise, the pixel is likely to be isolated sensor noise that needs to be filtered out, so the pixel value at the corresponding position of the binarized map is set to 255.
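One possible reading of the neighborhood test can be sketched with NumPy. The interpretation below — a black pixel survives if any other black or gray pixel lies in its 3×3 window — is an assumption, since the text does not pin down the exact structure test:

```python
import numpy as np

def binarize(pre):
    """Map the three-level image {0, 205, 255} to {0, 255}. A black
    pixel is kept only if its 3x3 window contains another black (0)
    or gray (205) pixel; otherwise it is treated as isolated noise."""
    h, w = pre.shape
    out = np.full_like(pre, 255)
    for y in range(h):
        for x in range(w):
            if pre[y, x] != 0:
                continue
            window = pre[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
            # the center itself is non-white, so require more than one
            # non-white pixel in the window
            if np.sum(window != 255) > 1:
                out[y, x] = 0
    return out
```

An isolated black pixel surrounded by white thus becomes white, while two adjacent black pixels both remain black.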
  • Step S104: Perform a boundary filling to the preprocessed map and the binarized map to obtain a boundary-filled preprocessed map and a boundary-filled binarized map.
  • Specifically, a black boundary of the binarized map is determined. In one embodiment, a region growing algorithm can be used to extract and save all black boundaries in the binarized map. The region growing algorithm can be implemented based on either breadth-first traversal or depth-first traversal.
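The breadth-first variant of the region growing extraction can be sketched as a generic 8-connected component search (an illustrative sketch, not the patented implementation itself; maps are lists of lists with 0 for black):

```python
from collections import deque

def extract_black_boundaries(binary):
    """Collect 8-connected components of black (0) pixels in the
    binarized map via breadth-first region growing. Each component
    is one black boundary, returned as a list of (y, x) pixels."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    boundaries = []
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] != 0 or seen[sy][sx]:
                continue
            comp, queue = [], deque([(sy, sx)])
            seen[sy][sx] = True
            while queue:
                y, x = queue.popleft()
                comp.append((y, x))
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] == 0 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
            boundaries.append(comp)
    return boundaries
```

Swapping the deque's `popleft` for `pop` would give the depth-first variant mentioned above.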
  • Then, a boundary filling is performed to the preprocessed map and the binarized map, with each black pixel within the black boundary of the binarized map being a center, so as to obtain the boundary-filled preprocessed map and the boundary-filled binarized map.
  • The specific process of boundary filling is shown in FIG. 4. Step S1041: Search for white pixels in eight-neighbor pixels of each of three-value target pixels. The three-value target pixels are pixels at same positions as binary target pixels in the preprocessed map, and the binary target pixels are pixels in the black boundary of the binarized map. If a white pixel is found, the procedure goes to step S1042.
  • Step S1042: Record each pixel scanned on a ghost detection line until a gray pixel is scanned, if a white pixel is found. The ghost detection line is a ray directed from the found white pixel to a corresponding three-value target pixel. A threshold of the maximum number of connected pixels that can be scanned in the direction of the ghost detection line can be preset. This threshold is denoted as maxDetectedPixel, and its value can be set according to actual conditions. In the embodiment, maxDetectedPixel is set to 5. Generally, in engineering practice, one pixel in a rasterized map is equal to 5 centimeters in real life. If the threshold is set to 5, because an effective ghost detection line that can be filled must have connected black and gray pixels as its two endpoints, the ghosting area that can be filled has a maximum width of three consecutive white pixels. That is, a ghosting area of no more than 15 centimeters can be processed.
  • If a number of recorded pixels is less than a preset value (i.e., maxDetectedPixel), and the last recorded pixel is a black pixel, the procedure goes to step S1043.
  • Step S1043: Perform a filling to pixels to be filled in the binarized map and the preprocessed map, if a number of recorded pixels is less than a preset value and the last recorded pixel is a black pixel.
  • The white pixels in the recorded pixels may be identified as the pixels to be filled in the preprocessed map, and the pixels in the binarized map at the same position as the pixels to be filled in the preprocessed map may be identified as the pixels to be filled in the binarized map. Finally, the pixels to be filled in the binarized map and the pixels to be filled in the preprocessed map are all set as black pixels.
  • Each black pixel in the black boundary of the binarized map is traversed. The process shown in FIG. 4 is performed so as to perform boundary filling on the preprocessed map and the binarized map. The preprocessed map obtained after the boundary filling is the boundary-filled preprocessed map, and the binarized map obtained after the boundary filling is the boundary-filled binarized map. Now, the problem of boundary ghosting has been transformed into a boundary interference problem.
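The boundary-filling loop of steps S1041 through S1043 can be sketched in plain Python as follows. The maps are lists of lists with values 0 (black), 205 (gray), and 255 (white); the function name, the in-place updates, and the handling of rays that leave the map are illustrative assumptions:

```python
def ghost_fill(pre, binary, boundary_pixels, max_detected=5):
    """For each black boundary pixel, cast a ray from each white
    8-neighbor through the pixel, recording pixels until a gray one
    is met. If the scan ended at gray before max_detected pixels and
    the last recorded pixel is black, fill the recorded white pixels
    black in both maps (steps S1041-S1043, sketched)."""
    h, w = len(pre), len(pre[0])
    for ty, tx in boundary_pixels:
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dy == 0 and dx == 0:
                    continue
                wy, wx = ty + dy, tx + dx
                if not (0 <= wy < h and 0 <= wx < w) or pre[wy][wx] != 255:
                    continue
                ry, rx = ty - wy, tx - wx   # ray through the target pixel
                recorded, found_gray = [], False
                y, x = ty, tx
                while len(recorded) < max_detected:
                    y, x = y + ry, x + rx
                    if not (0 <= y < h and 0 <= x < w):
                        break
                    if pre[y][x] == 205:    # gray pixel ends the scan
                        found_gray = True
                        break
                    recorded.append((y, x))
                # fill only when the far endpoint is a black pixel
                if (found_gray and recorded
                        and pre[recorded[-1][0]][recorded[-1][1]] == 0):
                    for fy, fx in recorded:
                        if pre[fy][fx] == 255:
                            pre[fy][fx] = 0
                            binary[fy][fx] = 0
```

On a row such as black–white–white–black–gray, the two interior white pixels are filled black, closing the ghost gap.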
  • Step S105: Perform a boundary thinning to the boundary-filled binarized map to obtain a thinned binarized map.
  • In order to solve the boundary interference problem, the map needs a thinning process. That is, all the thick boundaries formed by multi-pixel aggregation in the map are thinned to a width of one pixel. In the embodiment, a binary image edge thinning algorithm may be used to thin the boundary of the boundary-filled binarized map to obtain a thinned binarized map. The binary image edge thinning algorithm includes, but is not limited to, the fast parallel algorithm proposed by Zhang and Suen, the Hilditch, Pavlidis, and Rosenfeld thinning algorithms, and the index table thinning algorithm.
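As an illustration, the Zhang–Suen fast parallel algorithm named above can be implemented as follows. This is a textbook version, assuming the image is given as 0/1 lists with 1 for black boundary pixels and at least a one-pixel background border:

```python
def zhang_suen(image):
    """Zhang-Suen fast parallel thinning on a binary image
    (1 = boundary pixel, 0 = background). Returns a copy with
    thick boundaries thinned toward one-pixel width."""
    img = [row[:] for row in image]
    h, w = len(img), len(img[0])

    def neighbours(y, x):
        # P2..P9: N, NE, E, SE, S, SW, W, NW
        return [img[y-1][x], img[y-1][x+1], img[y][x+1], img[y+1][x+1],
                img[y+1][x], img[y+1][x-1], img[y][x-1], img[y-1][x-1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):          # the two parallel sub-iterations
            to_clear = []
            for y in range(1, h - 1):
                for x in range(1, w - 1):
                    if img[y][x] != 1:
                        continue
                    p = neighbours(y, x)
                    b = sum(p)       # B(P1): number of black neighbours
                    # A(P1): 0->1 transitions in the circular sequence
                    a = sum(p[i] == 0 and p[(i + 1) % 8] == 1
                            for i in range(8))
                    if not (2 <= b <= 6 and a == 1):
                        continue
                    if step == 0:    # P2*P4*P6 == 0 and P4*P6*P8 == 0
                        if p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0:
                            to_clear.append((y, x))
                    else:            # P2*P4*P8 == 0 and P2*P6*P8 == 0
                        if p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0:
                            to_clear.append((y, x))
            for y, x in to_clear:
                img[y][x] = 0
                changed = True
    return img
```

A thick bar several pixels wide is eroded from both sides until only a one-pixel-wide line of its interior remains.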
  • Step S106: Perform a boundary thinning to the boundary-filled preprocessed map, according to the thinned binarized map, to obtain a thinned preprocessed map.
  • In the thinned binarized map obtained through boundary thinning, redundant black pixels are removed, and the actual boundary of the wall or object is changed, so a merge operation is required to update the boundary-filled preprocessed map. Specifically, first, the pixels in the boundary-filled preprocessed map that are at the same positions as the black pixels in the thinned binarized map are set as black pixels. Then, the pixels to be thinned in the boundary-filled preprocessed map are determined; the pixels to be thinned are black pixels at the same positions as the white pixels in the thinned binarized map. If the number of white pixels in the eight-neighbor pixels of the pixels to be thinned is greater than the number of gray pixels, the pixels to be thinned are set as white pixels. If the number of white pixels in the eight-neighbor pixels of the pixels to be thinned is less than or equal to the number of gray pixels, the pixels to be thinned are set as gray pixels. After the processing, the thinned preprocessed map is obtained.
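The merge operation can be sketched as below. The single raster-order pass and the tie-breaking toward gray are assumptions, since the text does not specify a traversal order:

```python
def merge_thinned(pre, thin_bin):
    """Merge step (S106): update the boundary-filled preprocessed map
    from the thinned binarized map. Maps are lists of lists with
    values 0 (black), 205 (gray), 255 (white)."""
    h, w = len(pre), len(pre[0])
    out = [row[:] for row in pre]
    # 1. copy the thinned black boundary into the preprocessed map
    for y in range(h):
        for x in range(w):
            if thin_bin[y][x] == 0:
                out[y][x] = 0
    # 2. resolve pixels that are black here but white after thinning
    for y in range(h):
        for x in range(w):
            if not (out[y][x] == 0 and thin_bin[y][x] == 255):
                continue
            whites = grays = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dy == 0 and dx == 0:
                        continue
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        if out[ny][nx] == 255:
                            whites += 1
                        elif out[ny][nx] == 205:
                            grays += 1
            # majority vote; ties go to gray, as in the claim language
            out[y][x] = 255 if whites > grays else 205
    return out
```

A redundant black pixel surrounded mostly by free space thus becomes white, while one bordering the unknown area becomes gray.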
  • In one embodiment, the thinned preprocessed map can be directly used as the final result. Although the thinned preprocessed map has solved the problems of boundary interference and boundary ghosting, it may worsen the problem of the black boundary extending into the gray areas and the problem of black objects remaining outside the boundary of black walls. In other embodiments, after the thinned preprocessed map is obtained, the black boundary of the thinned binarized map can also be extracted (similar to the boundary extraction process in step S104), and an optimization is performed to the thinned preprocessed map and the thinned binarized map, with each black pixel within the black boundary of the thinned binarized map being a center, so as to obtain an optimized preprocessed map and an optimized binarized map.
  • Taking any black pixel in the black boundary of the thinned binary map as an example, the specific process of optimization processing may include the following steps.
  • First, count a number of white pixels in the eight-neighbor pixels of the thinned three-value target pixels, wherein the thinned three-value target pixels are pixels at same positions as thinned binary target pixels in the thinned preprocessed map, and the thinned binary target pixels are pixels in the black boundary of the thinned binarized map.
  • If the counted number of the white pixels is equal to 0, it means that the black boundary is not adjacent to a white area where the robot can move. That is, the black boundary has extended into the gray area. This situation does not comply with the white-black-gray projection law of laser scanning and needs to be filtered out. The specific filtering process is as follows: set the thinned three-value target pixels as gray pixels, and set the thinned binary target pixels as white pixels.
  • If the counted number of the white pixels is greater than 0, it is then determined whether each of the scan lines is blocked by black pixels within a preset distance. The scan lines are rays directed from gray pixels in the eight-neighbor pixels of each of the thinned three-value target pixels to a corresponding one of the thinned three-value target pixels. The number of scan lines is the same as the number of gray pixels in the eight-neighbor pixels, which means that the ray from any gray pixel in the eight-neighbor pixels to a thinned three-value target pixel is a scan line. The distance is denoted as maxScannedPixel, and its value can be set according to actual situations. The value is based on the ratio of the actual diameter of the robot to the actual distance that a single pixel in the grid map represents. If the actual diameter of the robot is 50 cm, and one pixel on the grid map is equal to a distance of 5 cm in real life, maxScannedPixel can be set to 10, which means that at most 10 pixels are scanned along each scan line. If all scan lines of the thinned three-value target pixels are blocked by the black boundary, it means that the thinned three-value target pixels are likely to belong to a remaining object boundary outside the black wall boundary. Because no nearby white area can accommodate the robot, such a boundary cannot be produced according to the white-black-gray projection law of laser scanning, and these pixels need to be filtered out. The filtering process is as follows: set the thinned three-value target pixels and the white pixels on each of the scan lines as gray pixels, and set the thinned binary target pixels as white pixels. If at least one scan line is not blocked by black pixels within the preset distance, no action is required.
  • Each black pixel in the black boundary of the thinned binarized map is traversed. The above-mentioned optimization process is performed so as to perform optimization on the thinned preprocessed map and the thinned binarized map. The preprocessed map obtained after the optimization process is the optimized preprocessed map, and the binarized map obtained after the optimization process is the optimized binarized map.
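The optimization pass can be sketched as follows. Treating a pixel with no gray neighbor as requiring no action, and the in-place updates, are illustrative assumptions:

```python
def optimize(pre, thin_bin, boundary_pixels, max_scanned=10):
    """Sketch of the optimization pass. A boundary pixel with no white
    8-neighbor has drifted into the gray area and is erased; one whose
    every gray-neighbor ray is blocked by black within max_scanned
    pixels is treated as object residue outside the wall and erased,
    together with white pixels on those rays."""
    h, w = len(pre), len(pre[0])
    for ty, tx in boundary_pixels:
        nbrs = [(ty + dy, tx + dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy or dx) and 0 <= ty + dy < h and 0 <= tx + dx < w]
        whites = [n for n in nbrs if pre[n[0]][n[1]] == 255]
        grays = [n for n in nbrs if pre[n[0]][n[1]] == 205]
        if not whites:
            pre[ty][tx] = 205        # boundary drifted into the gray area
            thin_bin[ty][tx] = 255
            continue
        if not grays:
            continue                 # no scan lines to test (assumption)
        scans, all_blocked = [], True
        for gy, gx in grays:
            ry, rx = ty - gy, tx - gx   # ray through the target pixel
            pixels, blocked = [], False
            y, x = ty, tx
            for _ in range(max_scanned):
                y, x = y + ry, x + rx
                if not (0 <= y < h and 0 <= x < w):
                    break
                if pre[y][x] == 0:   # blocked by the black boundary
                    blocked = True
                    break
                pixels.append((y, x))
            if not blocked:
                all_blocked = False
                break
            scans.append(pixels)
        if all_blocked:
            pre[ty][tx] = 205
            thin_bin[ty][tx] = 255
            for pixels in scans:     # clear whites on the blocked rays
                for y, x in pixels:
                    if pre[y][x] == 255:
                        pre[y][x] = 205
```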
  • The thinned preprocessed map can be used as the final result, as shown in FIG. 5, which is the final result obtained by processing the original grayscale map shown in FIG. 3. It can be seen from FIG. 5 that the boundary interference and ghosting problems have been effectively eliminated, and a refined map without ghosting has been obtained.
  • With the above-mentioned method, the boundary interference and ghosting problems are solved without increasing hardware cost. By performing a series of image processing on the original grayscale map, the boundary interference and ghosting problems can be eliminated, which facilitates wide application.
  • It should be understood that, sequence numbers of the foregoing procedures do not indicate an execution sequence. The execution sequence of the procedures should be determined according to functions and internal logic thereof, and should not constitute any limitation to the implementation procedure of the embodiments of the present disclosure.
  • Referring to FIG. 6, in one embodiment, a map building device includes a map acquiring module 501, a preprocessing module 502, a binarization processing module 503, a boundary filling module 504, a binarization thinning module 505, and a preprocess thinning module 506.
  • The map acquiring module 501 is configured to acquire an original grayscale map. The preprocessing module 502 is configured to preprocess the original grayscale map to obtain a preprocessed map. The binarization processing module 503 is configured to binarize the preprocessed map to obtain a binarized map. The boundary filling module 504 is configured to perform a boundary filling to the preprocessed map and the binarized map to obtain a boundary-filled preprocessed map and a boundary-filled binarized map. The binarization thinning module 505 is configured to perform a boundary thinning to the boundary-filled binarized map to obtain a thinned binarized map. The preprocess thinning module 506 is configured to perform a boundary thinning to the boundary-filled preprocessed map, according to the thinned binarized map, to obtain a thinned preprocessed map.
  • In one embodiment, the boundary filling module 504 may include a boundary extraction submodule and a boundary filling submodule. The boundary extraction submodule is configured to determine a black boundary of the binarized map. The boundary filling submodule is configured to perform a boundary filling to the preprocessed map and the binarized map, with each black pixel within the black boundary of the binarized map being a center, so as to obtain the boundary-filled preprocessed map and the boundary-filled binarized map.
  • In one embodiment, the boundary filling submodule may include a pixel searching unit, a pixel recording unit, and a pixel filling unit. The pixel searching unit is configured to search for white pixels in eight-neighbor pixels of each of three-value target pixels. The three-value target pixels are pixels at same positions as binary target pixels in the preprocessed map, and the binary target pixels are pixels in the black boundary of the binarized map. The pixel recording unit is configured to record each pixel scanned on a ghost detection line until a gray pixel is scanned, if a white pixel is found. The ghost detection line is a ray directed from the found white pixel to a corresponding three-value target pixel. The pixel filling unit is configured to perform a filling to pixels to be filled in the binarized map and the preprocessed map, if a number of recorded pixels is less than a preset value and a last recorded pixel is a black pixel.
  • In one embodiment, the pixel filling unit may include a first identifying submodule, a second identifying submodule, and a pixel filling submodule. The first identifying submodule is configured to identify white pixels in the recorded pixels as pixels to be filled in the preprocessed map. The second identifying submodule is configured to identify pixels in the binarized map at same positions as pixels to be filled in the preprocessed map as pixels to be filled in the binarized map. The pixel filling submodule is configured to set the pixels to be filled in the binarized map and the pixels to be filled in the preprocessed map as black pixels.
  • In one embodiment, the preprocess thinning module 506 may include a first setting submodule, a thinning pixel determining submodule, a second setting submodule, and a third setting submodule. The first setting submodule is configured to set the pixels in the boundary-filled preprocessed map at same positions as the black pixels in the thinned binarized map as black pixels. The thinning pixel determining submodule is configured to determining pixels to be thinned in the boundary-filled preprocessed map. The pixels to be thinned are black pixels at same positions as white pixels in the thinned binarized map. The second setting submodule is configured to set the pixels to be thinned as white pixels, if a number of white pixels in eight-neighbor pixels of the pixels to be thinned is greater than a number of gray pixels. The third setting submodule is configured to set the pixels to be thinned as gray pixels, if the number of white pixels in eight-neighbor pixels of the pixels to be thinned is less than or equal to the number of gray pixels.
  • In one embodiment, the map building device may further include a boundary extraction module and an optimization module. The boundary extraction module is configured to determine a black boundary of the thinned binarized map. The optimization module is configured to perform an optimization to the thinned preprocessed map and the thinned binarized map, with each black pixel within the black boundary of the thinned binarized map being a center, so as to obtain an optimized preprocessed map and an optimized binarized map.
  • In one embodiment, the optimization module may include a counting submodule, a first optimization submodule, and a second optimization submodule. The counting submodule is configured to count a number of white pixels in the eight-neighbor pixels of the thinned three-value target pixels. The thinned three-value target pixels are pixels at same positions as thinned binary target pixels in the thinned preprocessed map, and the thinned binary target pixels are pixels in the black boundary of the thinned binarized map. The first optimization submodule is configured to set the thinned three-value target pixels as gray pixels, and set the thinned binary target pixels as white pixels, if the counted number of the white pixels is 0. The second optimization submodule is configured to set the thinned three-value target pixels and white pixels on each of scan lines as gray pixels, and set the thinned binary target pixel as white pixels, if the counted number of the white pixels is greater than 0 and each of the scan line is blocked by black pixels within a preset distance. The scan lines are rays directed from gray pixels in eight-neighbor pixels of each of the thinned three-value target pixels to a corresponding one of the thinned three-value target pixels.
  • Those skilled in the art will understand that, for the convenience and conciseness of description, the specific working processes of the devices, modules, and units described above can refer to the corresponding processes in the foregoing method embodiments, which will not be repeated here.
  • In the above-mentioned embodiments, the description of each embodiment has its own emphasis. For parts that are not described in detail, reference may be made to related descriptions of other embodiments.
  • A person having ordinary skill in the art may clearly understand that, for the convenience and simplicity of description, the division of the above-mentioned functional units and modules is merely an example for illustration. In actual applications, the above-mentioned functions may be allocated to be performed by different functional units according to requirements, that is, the internal structure of the device may be divided into different functional units or modules to complete all or part of the above-mentioned functions. The functional units and modules in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The above-mentioned integrated unit may be implemented in the form of hardware or in the form of software functional unit. In addition, the specific name of each functional unit and module is merely for the convenience of distinguishing each other and are not intended to limit the scope of protection of the present disclosure. For the specific operation process of the units and modules in the above-mentioned system, reference may be made to the corresponding processes in the above-mentioned method embodiments, and are not described herein.
  • A person having ordinary skill in the art may clearly understand that, the exemplificative units and steps described in the embodiments disclosed herein may be implemented through electronic hardware or a combination of computer software and electronic hardware. Whether these functions are implemented through hardware or software depends on the specific application and design constraints of the technical schemes. Those ordinary skilled in the art may implement the described functions in different manners for each particular application, while such implementation should not be considered as beyond the scope of the present disclosure.
  • In the embodiments provided by the present disclosure, it should be understood that the disclosed apparatus (device)/terminal device and method may be implemented in other manners. For example, the above-mentioned apparatus (device)/terminal device embodiment is merely exemplary. For example, the division of modules or units is merely a logical functional division, and other division manner may be used in actual implementations, that is, multiple units or components may be combined or be integrated into another system, or some of the features may be ignored or not performed. In addition, the shown or discussed mutual coupling may be direct coupling or communication connection, and may also be indirect coupling or communication connection through some interfaces, devices or units, and may also be electrical, mechanical or other forms.
  • The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual requirements to achieve the objectives of the solutions of the embodiments.
  • The functional units and modules in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The above-mentioned integrated unit may be implemented in the form of hardware or in the form of software functional unit.
  • When the integrated module/unit is implemented in the form of a software functional unit and is sold or used as an independent product, the integrated module/unit may be stored in a non-transitory computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above-mentioned embodiments of the present disclosure may also be implemented by instructing relevant hardware through a computer program. The computer program may be stored in a non-transitory computer-readable storage medium, which may implement the steps of each of the above-mentioned method embodiments when executed by a processor. The computer program includes computer program codes which may be in the form of source codes, object codes, executable files, certain intermediate forms, and the like. The computer-readable medium may include any entity or device capable of carrying the computer program codes, a recording medium, a USB flash drive, a portable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), electric carrier signals, telecommunication signals, and software distribution media. It should be noted that the content contained in the computer readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction. For example, in some jurisdictions, according to the legislation and patent practice, a computer readable medium does not include electric carrier signals and telecommunication signals.
  • The embodiments above are only illustrative for the technical solutions of the present disclosure, rather than limiting the present disclosure. Although the present disclosure is described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that they still can modify the technical solutions described in the foregoing various embodiments, or make equivalent substitutions on partial technical features; however, these modifications or substitutions do not make the nature of the corresponding technical solution depart from the spirit and scope of technical solutions of various embodiments of the present disclosure, and all should be included within the protection scope of the present disclosure.

Claims (20)

What is claimed is:
1. A computer-implemented method for building a map, comprising executing on a processor steps of:
acquiring an original grayscale map;
preprocessing the original grayscale map to obtain a preprocessed map;
binarizing the preprocessed map to obtain a binarized map;
performing a boundary filling to the preprocessed map and the binarized map to obtain a boundary-filled preprocessed map and a boundary-filled binarized map;
performing a boundary thinning to the boundary-filled binarized map to obtain a thinned binarized map; and
performing a boundary thinning to the boundary-filled preprocessed map, according to the thinned binarized map, to obtain a thinned preprocessed map.
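For illustration only (not limiting the claim), the first steps of claim 1 can be sketched in Python. The three-value pixel encoding (0 = black/occupied, 205 = gray/unknown, 254 = white/free, the common ROS map_server convention) and the nearest-value preprocessing rule are assumptions; the claims require black, gray, and white pixels but do not fix the values or the preprocessing:

```python
# Assumed three-value pixel encoding (ROS map_server convention):
# the claims require black / gray / white but do not fix the values.
BLACK, GRAY, WHITE = 0, 205, 254

def preprocess(raw):
    """Hypothetical preprocessing: snap every pixel of the raw
    grayscale map to the nearest of the three canonical values."""
    snap = lambda v: min((BLACK, GRAY, WHITE), key=lambda c: abs(v - c))
    return [[snap(v) for v in row] for row in raw]

def binarize(pre):
    """Binarization as the claims use it: only black (occupied)
    survives; gray (unknown) is merged into white (free)."""
    return [[BLACK if v == BLACK else WHITE for v in row] for row in pre]
```

The boundary filling and boundary thinning steps of the claim then operate on these two maps in tandem, with the binarized map driving the edits applied to the preprocessed (three-value) map.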
2. The method according to claim 1, wherein the step of performing a boundary filling to the preprocessed map and the binarized map to obtain a boundary-filled preprocessed map and a boundary-filled binarized map comprises:
determining a black boundary of the binarized map; and
performing a boundary filling to the preprocessed map and the binarized map, with each black pixel within the black boundary of the binarized map being a center, so as to obtain the boundary-filled preprocessed map and the boundary-filled binarized map.
3. The method according to claim 2, wherein the step of performing a boundary filling to the preprocessed map and the binarized map, with each black pixel within the black boundary of the binarized map being a center, comprises:
searching for white pixels in eight-neighbor pixels of each of three-value target pixels, wherein the three-value target pixels are pixels at same positions as binary target pixels in the preprocessed map, and the binary target pixels are pixels in the black boundary of the binarized map;
recording each pixel scanned on a ghost detection line until a gray pixel is scanned, if a white pixel is found, wherein the ghost detection line is a ray directed from the found white pixel to a corresponding three-value target pixel; and
performing a filling to pixels to be filled in the binarized map and the preprocessed map, if a number of recorded pixels is less than a preset value and a last recorded pixel is a black pixel.
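For illustration only, the ghost-detection filling of claims 3 and 4 can be sketched as below. The pixel encoding (0 black, 205 gray, 254 white), the default preset distance, and the exact ray-stepping rule (starting at the white neighbor and stepping through the target pixel until a gray pixel or the map edge) are assumptions not fixed by the claims:

```python
# Assumed pixel encoding (ROS map_server convention); not fixed by the claims.
BLACK, GRAY, WHITE = 0, 205, 254

def ghost_fill(binary, pre, preset=6):
    """Sketch of the ghost-detection filling of claims 3-4, in place.

    For every black pixel of the binarized map, each white 8-neighbor in
    the preprocessed map starts a "ghost detection line": a ray stepping
    from that white pixel through the target pixel, recording pixels
    until a gray pixel (or the map edge) stops it.  If fewer than
    `preset` pixels were recorded and the last one is black, the white
    pixels on the line are treated as ghost artifacts and filled black
    in both maps."""
    h, w = len(pre), len(pre[0])
    nbrs = [(-1,-1),(-1,0),(-1,1),(0,-1),(0,1),(1,-1),(1,0),(1,1)]
    for y in range(h):
        for x in range(w):
            if binary[y][x] != BLACK:
                continue  # centers are the black pixels of the binarized map
            for dy, dx in nbrs:
                wy, wx = y + dy, x + dx
                if not (0 <= wy < h and 0 <= wx < w) or pre[wy][wx] != WHITE:
                    continue
                recorded = []
                cy, cx, ry, rx = wy, wx, -dy, -dx   # white neighbor -> target
                while 0 <= cy < h and 0 <= cx < w and pre[cy][cx] != GRAY:
                    recorded.append((cy, cx))
                    cy, cx = cy + ry, cx + rx
                if 0 < len(recorded) < preset and \
                        pre[recorded[-1][0]][recorded[-1][1]] == BLACK:
                    for fy, fx in recorded:         # claim 4: whites -> black
                        if pre[fy][fx] == WHITE:
                            pre[fy][fx] = BLACK
                            binary[fy][fx] = BLACK
```

The effect is to close thin white gaps that sit between a wall pixel and unknown space, which is where scan-matching "ghost" boundaries typically appear.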
4. The method according to claim 3, wherein the step of performing a filling to pixels to be filled in the binarized map and the preprocessed map, comprises:
identifying white pixels in the recorded pixels as pixels to be filled in the preprocessed map;
identifying pixels in the binarized map at same positions as pixels to be filled in the preprocessed map as pixels to be filled in the binarized map; and
setting the pixels to be filled in the binarized map and the pixels to be filled in the preprocessed map as black pixels.
5. The method according to claim 1, wherein the step of performing a boundary thinning to the boundary-filled preprocessed map, according to the thinned binarized map, to obtain a thinned preprocessed map, comprises:
setting the pixels in the boundary-filled preprocessed map at same positions as the black pixels in the thinned binarized map as black pixels;
determining pixels to be thinned in the boundary-filled preprocessed map, wherein the pixels to be thinned are black pixels at same positions as white pixels in the thinned binarized map;
setting the pixels to be thinned as white pixels, if a number of white pixels in eight-neighbor pixels of the pixels to be thinned is greater than a number of gray pixels; and
setting the pixels to be thinned as gray pixels, if the number of white pixels in eight-neighbor pixels of the pixels to be thinned is less than or equal to the number of gray pixels.
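For illustration only, the thinning of claim 5 can be sketched as follows. The pixel encoding is again an assumed convention, and writing the reclassification into a separate copy (so that neighbor counts are taken on the pre-thinning state) is a design choice the claim does not dictate:

```python
# Assumed pixel encoding (ROS map_server convention); not fixed by the claims.
BLACK, GRAY, WHITE = 0, 205, 254

def thin_preprocessed(pre, thin_bin):
    """Sketch of claim 5: project the thinned binary skeleton onto the
    preprocessed map, then reclassify the remaining black pixels by a
    majority vote of their white vs. gray 8-neighbors."""
    h, w = len(pre), len(pre[0])
    out = [row[:] for row in pre]
    # Step 1: pixels on the thinned binary skeleton stay/become black.
    for y in range(h):
        for x in range(w):
            if thin_bin[y][x] == BLACK:
                out[y][x] = BLACK
    # Steps 2-4: black pixels not on the skeleton are thinned away.
    result = [row[:] for row in out]
    for y in range(h):
        for x in range(w):
            if out[y][x] == BLACK and thin_bin[y][x] == WHITE:
                whites = grays = 0
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        if dy == dx == 0:
                            continue
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            if out[ny][nx] == WHITE:
                                whites += 1
                            elif out[ny][nx] == GRAY:
                                grays += 1
                # More white neighbors -> free space; otherwise unknown.
                result[y][x] = WHITE if whites > grays else GRAY
    return result
```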
6. The method according to claim 1, further comprising, after the step of performing a boundary thinning to the boundary-filled preprocessed map, according to the thinned binarized map, to obtain a thinned preprocessed map:
determining a black boundary of the thinned binarized map; and
performing an optimization to the thinned preprocessed map and the thinned binarized map, with each black pixel within the black boundary of the thinned binarized map being a center, so as to obtain an optimized preprocessed map and an optimized binarized map.
7. The method according to claim 6, wherein the step of performing an optimization to the thinned preprocessed map and the thinned binarized map, with each black pixel within the black boundary of the thinned binarized map being a center, so as to obtain an optimized preprocessed map and an optimized binarized map, comprises:
counting a number of white pixels in the eight-neighbor pixels of the thinned three-value target pixels, wherein the thinned three-value target pixels are pixels at same positions as thinned binary target pixels in the thinned preprocessed map, and the thinned binary target pixels are pixels in the black boundary of the thinned binarized map;
setting the thinned three-value target pixels as gray pixels, and setting the thinned binary target pixels as white pixels, if the counted number of the white pixels is 0; and
setting the thinned three-value target pixels and white pixels on each of the scan lines as gray pixels, and setting the thinned binary target pixels as white pixels, if the counted number of the white pixels is greater than 0 and each of the scan lines is blocked by black pixels within a preset distance, wherein the scan lines are rays directed from gray pixels in eight-neighbor pixels of each of the thinned three-value target pixels to a corresponding one of the thinned three-value target pixels.
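For illustration only, the optimization of claim 7 can be sketched as below. The pixel encoding, the preset distance, and the ray semantics (stepping from each gray 8-neighbor through the target pixel) are assumptions; the claims name the rules but do not fix these details:

```python
# Assumed pixel encoding (ROS map_server convention); not fixed by the claims.
BLACK, GRAY, WHITE = 0, 205, 254

def optimize(thin_pre, thin_bin, preset=4):
    """Sketch of claim 7, applied in place.

    Rule 1: a black boundary pixel with no white 8-neighbor is treated
    as interior noise (gray in the three-value map, white in the binary
    map).  Rule 2: if every scan line -- a ray from a gray 8-neighbor
    through the pixel -- is blocked by a black pixel within `preset`
    steps, the pixel and the white pixels on those rays turn gray."""
    h, w = len(thin_pre), len(thin_pre[0])
    nbrs = [(-1,-1),(-1,0),(-1,1),(0,-1),(0,1),(1,-1),(1,0),(1,1)]
    for y in range(h):
        for x in range(w):
            if thin_bin[y][x] != BLACK:
                continue
            whites, grays = [], []
            for dy, dx in nbrs:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    if thin_pre[ny][nx] == WHITE:
                        whites.append((dy, dx))
                    elif thin_pre[ny][nx] == GRAY:
                        grays.append((dy, dx))
            if not whites:                      # rule 1: no white neighbor
                thin_pre[y][x] = GRAY
                thin_bin[y][x] = WHITE
                continue
            blocked_all = bool(grays)           # rule 2: all rays blocked
            line_whites = []
            for dy, dx in grays:
                ry, rx = -dy, -dx               # gray neighbor -> through (y, x)
                blocked, seen = False, []
                for k in range(1, preset + 1):
                    cy, cx = y + k * ry, x + k * rx
                    if not (0 <= cy < h and 0 <= cx < w):
                        break
                    if thin_pre[cy][cx] == BLACK:
                        blocked = True
                        break
                    if thin_pre[cy][cx] == WHITE:
                        seen.append((cy, cx))
                if not blocked:
                    blocked_all = False
                    break
                line_whites += seen
            if blocked_all:
                thin_pre[y][x] = GRAY
                thin_bin[y][x] = WHITE
                for cy, cx in line_whites:
                    thin_pre[cy][cx] = GRAY
```

On a one-pixel "double wall" (black, white, black between unknown regions), rule 2 removes the inner wall and rule 1 then clears the now-isolated outer one.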
8. A computer readable storage medium having stored therein instructions, which when executed by a robot, cause the robot to perform a method for building a map, the method comprising executing on a processor steps of:
acquiring an original grayscale map;
preprocessing the original grayscale map to obtain a preprocessed map;
binarizing the preprocessed map to obtain a binarized map;
performing a boundary filling to the preprocessed map and the binarized map to obtain a boundary-filled preprocessed map and a boundary-filled binarized map;
performing a boundary thinning to the boundary-filled binarized map to obtain a thinned binarized map; and
performing a boundary thinning to the boundary-filled preprocessed map, according to the thinned binarized map, to obtain a thinned preprocessed map.
9. The computer readable storage medium according to claim 8, wherein the step of performing a boundary filling to the preprocessed map and the binarized map to obtain a boundary-filled preprocessed map and a boundary-filled binarized map comprises:
determining a black boundary of the binarized map; and
performing a boundary filling to the preprocessed map and the binarized map, with each black pixel within the black boundary of the binarized map being a center, so as to obtain the boundary-filled preprocessed map and the boundary-filled binarized map.
10. The computer readable storage medium according to claim 9, wherein the step of performing a boundary filling to the preprocessed map and the binarized map, with each black pixel within the black boundary of the binarized map being a center, comprises:
searching for white pixels in eight-neighbor pixels of each of three-value target pixels, wherein the three-value target pixels are pixels at same positions as binary target pixels in the preprocessed map, and the binary target pixels are pixels in the black boundary of the binarized map;
recording each pixel scanned on a ghost detection line until a gray pixel is scanned, if a white pixel is found, wherein the ghost detection line is a ray directed from the found white pixel to a corresponding three-value target pixel; and
performing a filling to pixels to be filled in the binarized map and the preprocessed map, if a number of recorded pixels is less than a preset value and a last recorded pixel is a black pixel.
11. The computer readable storage medium according to claim 10, wherein the step of performing a filling to pixels to be filled in the binarized map and the preprocessed map, comprises:
identifying white pixels in the recorded pixels as pixels to be filled in the preprocessed map;
identifying pixels in the binarized map at same positions as pixels to be filled in the preprocessed map as pixels to be filled in the binarized map; and
setting the pixels to be filled in the binarized map and the pixels to be filled in the preprocessed map as black pixels.
12. The computer readable storage medium according to claim 8, wherein the step of performing a boundary thinning to the boundary-filled preprocessed map, according to the thinned binarized map, to obtain a thinned preprocessed map, comprises:
setting the pixels in the boundary-filled preprocessed map at same positions as the black pixels in the thinned binarized map as black pixels;
determining pixels to be thinned in the boundary-filled preprocessed map, wherein the pixels to be thinned are black pixels at same positions as white pixels in the thinned binarized map;
setting the pixels to be thinned as white pixels, if a number of white pixels in eight-neighbor pixels of the pixels to be thinned is greater than a number of gray pixels; and
setting the pixels to be thinned as gray pixels, if the number of white pixels in eight-neighbor pixels of the pixels to be thinned is less than or equal to the number of gray pixels.
13. The computer readable storage medium according to claim 8, further comprising, after the step of performing a boundary thinning to the boundary-filled preprocessed map, according to the thinned binarized map, to obtain a thinned preprocessed map:
determining a black boundary of the thinned binarized map; and
performing an optimization to the thinned preprocessed map and the thinned binarized map, with each black pixel within the black boundary of the thinned binarized map being a center, so as to obtain an optimized preprocessed map and an optimized binarized map.
14. The computer readable storage medium according to claim 13, wherein the step of performing an optimization to the thinned preprocessed map and the thinned binarized map, with each black pixel within the black boundary of the thinned binarized map being a center, so as to obtain an optimized preprocessed map and an optimized binarized map, comprises:
counting a number of white pixels in the eight-neighbor pixels of the thinned three-value target pixels, wherein the thinned three-value target pixels are pixels at same positions as thinned binary target pixels in the thinned preprocessed map, and the thinned binary target pixels are pixels in the black boundary of the thinned binarized map;
setting the thinned three-value target pixels as gray pixels, and setting the thinned binary target pixels as white pixels, if the counted number of the white pixels is 0; and
setting the thinned three-value target pixels and white pixels on each of the scan lines as gray pixels, and setting the thinned binary target pixels as white pixels, if the counted number of the white pixels is greater than 0 and each of the scan lines is blocked by black pixels within a preset distance, wherein the scan lines are rays directed from gray pixels in eight-neighbor pixels of each of the thinned three-value target pixels to a corresponding one of the thinned three-value target pixels.
15. A robot comprising:
one or more processors;
a storage; and
one or more computer programs stored in the storage and configured to be executed by the one or more processors to perform a method, the method comprising steps of:
acquiring an original grayscale map;
preprocessing the original grayscale map to obtain a preprocessed map;
binarizing the preprocessed map to obtain a binarized map;
performing a boundary filling to the preprocessed map and the binarized map to obtain a boundary-filled preprocessed map and a boundary-filled binarized map;
performing a boundary thinning to the boundary-filled binarized map to obtain a thinned binarized map; and
performing a boundary thinning to the boundary-filled preprocessed map, according to the thinned binarized map, to obtain a thinned preprocessed map.
16. The robot according to claim 15, wherein the step of performing a boundary filling to the preprocessed map and the binarized map to obtain a boundary-filled preprocessed map and a boundary-filled binarized map comprises:
determining a black boundary of the binarized map; and
performing a boundary filling to the preprocessed map and the binarized map, with each black pixel within the black boundary of the binarized map being a center, so as to obtain the boundary-filled preprocessed map and the boundary-filled binarized map.
17. The robot according to claim 16, wherein the step of performing a boundary filling to the preprocessed map and the binarized map, with each black pixel within the black boundary of the binarized map being a center, comprises:
searching for white pixels in eight-neighbor pixels of each of three-value target pixels, wherein the three-value target pixels are pixels at same positions as binary target pixels in the preprocessed map, and the binary target pixels are pixels in the black boundary of the binarized map;
recording each pixel scanned on a ghost detection line until a gray pixel is scanned, if a white pixel is found, wherein the ghost detection line is a ray directed from the found white pixel to a corresponding three-value target pixel; and
performing a filling to pixels to be filled in the binarized map and the preprocessed map, if a number of recorded pixels is less than a preset value and a last recorded pixel is a black pixel.
18. The robot according to claim 17, wherein the step of performing a filling to pixels to be filled in the binarized map and the preprocessed map, comprises:
identifying white pixels in the recorded pixels as pixels to be filled in the preprocessed map;
identifying pixels in the binarized map at same positions as pixels to be filled in the preprocessed map as pixels to be filled in the binarized map; and
setting the pixels to be filled in the binarized map and the pixels to be filled in the preprocessed map as black pixels.
19. The robot according to claim 15, wherein the step of performing a boundary thinning to the boundary-filled preprocessed map, according to the thinned binarized map, to obtain a thinned preprocessed map, comprises:
setting the pixels in the boundary-filled preprocessed map at same positions as the black pixels in the thinned binarized map as black pixels;
determining pixels to be thinned in the boundary-filled preprocessed map, wherein the pixels to be thinned are black pixels at same positions as white pixels in the thinned binarized map;
setting the pixels to be thinned as white pixels, if a number of white pixels in eight-neighbor pixels of the pixels to be thinned is greater than a number of gray pixels; and
setting the pixels to be thinned as gray pixels, if the number of white pixels in eight-neighbor pixels of the pixels to be thinned is less than or equal to the number of gray pixels.
20. The robot according to claim 15, further comprising, after the step of performing a boundary thinning to the boundary-filled preprocessed map, according to the thinned binarized map, to obtain a thinned preprocessed map:
determining a black boundary of the thinned binarized map; and
performing an optimization to the thinned preprocessed map and the thinned binarized map, with each black pixel within the black boundary of the thinned binarized map being a center, so as to obtain an optimized preprocessed map and an optimized binarized map.
US17/075,727 2019-12-11 2020-10-21 Map building method, computer-readable storage medium and robot Active 2041-07-10 US11593974B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911266982.8A CN111063029B (en) 2019-12-11 2019-12-11 Map construction method, map construction device, computer readable storage medium and robot
CN201911266982.8 2019-12-11

Publications (2)

Publication Number Publication Date
US20210183116A1 (en) 2021-06-17
US11593974B2 US11593974B2 (en) 2023-02-28

Family

ID=70300507

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/075,727 Active 2041-07-10 US11593974B2 (en) 2019-12-11 2020-10-21 Map building method, computer-readable storage medium and robot

Country Status (2)

Country Link
US (1) US11593974B2 (en)
CN (1) CN111063029B (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111681246B (en) * 2020-04-27 2024-01-23 珠海一微半导体股份有限公司 Region segmentation method of laser map
CN113516765B (en) * 2021-06-25 2023-08-11 深圳市优必选科技股份有限公司 Map management method, map management device and intelligent equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100008576A1 (en) * 2008-07-11 2010-01-14 Robinson Piramuthu System and method for segmentation of an image into tuned multi-scaled regions
US9740925B2 (en) * 2012-11-19 2017-08-22 Imds America Inc. Method and system for the spotting of arbitrary words in handwritten documents
US20170285648A1 (en) * 2016-04-01 2017-10-05 Locus Robotics Corporation Navigation using planned robot travel paths
US20190272638A1 (en) * 2016-11-11 2019-09-05 University Of South Florida Automated Stereology for Determining Tissue Characteristics
US20200302135A1 (en) * 2019-03-19 2020-09-24 Sasken Technologies Ltd Method and apparatus for localization of one-dimensional barcodes
US20210019536A1 (en) * 2018-03-29 2021-01-21 Sony Corporation Signal processing device and signal processing method, program, and mobile body
US20210333108A1 (en) * 2018-12-28 2021-10-28 Goertek Inc. Path Planning Method And Device And Mobile Device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6356646B1 (en) * 1999-02-19 2002-03-12 Clyde H. Spencer Method for creating thematic maps using segmentation of ternary diagrams
CN102890780B (en) * 2011-07-19 2015-07-22 富士通株式会社 Image processing device and image processing method


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220206510A1 (en) * 2020-12-28 2022-06-30 Bear Robotics, Inc. Method, system, and non-transitory computer-readable recording medium for generating a map for a robot
US11885638B2 (en) * 2020-12-28 2024-01-30 Bear Robotics, Inc. Method, system, and non-transitory computer-readable recording medium for generating a map for a robot

Also Published As

Publication number Publication date
CN111063029A (en) 2020-04-24
CN111063029B (en) 2023-06-09
US11593974B2 (en) 2023-02-28


Legal Events

Date Code Title Description
AS Assignment

Owner name: UBTECH ROBOTICS CORP LTD, CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GUO, RUI;LIU, ZHICHAO;PANG, JIANXIN;AND OTHERS;REEL/FRAME:054117/0548

Effective date: 20201016

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STCF Information on status: patent grant

Free format text: PATENTED CASE