WO2023005377A1 - Robot map construction method, and robot - Google Patents

Robot map construction method, and robot

Info

Publication number
WO2023005377A1
WO2023005377A1 · PCT/CN2022/094632 · CN2022094632W
Authority
WO
WIPO (PCT)
Prior art keywords
map
key frame
robot
frame image
point cloud
Prior art date
Application number
PCT/CN2022/094632
Other languages
English (en)
Chinese (zh)
Inventor
曹蒙
Original Assignee
追觅创新科技(苏州)有限公司
Priority date
Filing date
Publication date
Application filed by 追觅创新科技(苏州)有限公司 filed Critical 追觅创新科技(苏州)有限公司
Publication of WO2023005377A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/89Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/93Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Definitions

  • The present disclosure relates to the field of artificial intelligence, and in particular to a robot mapping method and a robot.
  • In its working environment, an intelligent mobile robot must first solve three problems: where it is, what is around it, and how to reach its destination. The first two concern how the robot localizes itself in the working environment and how it builds a map, that is, the Simultaneous Localization and Mapping (SLAM) process; the third is path planning and obstacle avoidance during movement, that is, the navigation problem.
  • For a mobile robot, navigation is the core technology, and it is also the key technology for realizing intelligent, autonomous movement.
  • the prerequisite for realizing navigation is the establishment of a navigation map.
  • In the related art, a robot's navigation map is generally established by mapping the point cloud data collected by radar onto the whole map, that is, by transforming the point cloud, through the robot's pose, into the coordinate system of the whole map. Specifically, the method judges whether an initialized grid map exists; if not, the current position is initialized to (0, 0, 0) and a grid map is initialized, and the radar data are converted into the map coordinate system through the pose. The grid map is composed of multiple cells with a resolution of 0.05 meters.
  • If an initialized grid map exists, the pose P of the current point cloud data is obtained, the radar point cloud is traversed, the coordinates of its N points are transformed onto the map through P, the grid cell corresponding to each mapped point is found on the 0.05-meter-resolution map, and that cell is marked.
  • The mark records how many times a cell has been hit by the point cloud: the more hits, the higher the occupancy probability of that cell.
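The grid-marking procedure described above can be sketched as follows. This is an illustrative sketch rather than the patent's implementation: the 0.05-meter resolution matches the text, while the function names and the dictionary-of-counts representation are our own assumptions.

```python
import math

RESOLUTION = 0.05  # metres per grid cell, as stated in the text

def world_to_grid(x, y, resolution=RESOLUTION):
    """Map a map-frame point to the integer grid cell that contains it."""
    return (int(math.floor(x / resolution)), int(math.floor(y / resolution)))

def mark_point_cloud(hit_counts, points):
    """Increment the hit count of every cell touched by the point cloud.
    The more often a cell is hit, the higher its occupancy probability."""
    for x, y in points:
        cell = world_to_grid(x, y)
        hit_counts[cell] = hit_counts.get(cell, 0) + 1
    return hit_counts
```

Because the counts accumulate across frames, a single frame with a wrong pose leaves marks in the wrong cells that are hard to undo afterwards, which is exactly the weakness of the related art that the disclosure targets.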
  • The purpose of the present disclosure is to provide a robot mapping method and a robot, so as to at least solve the problem in the related art that, if the positioning information of one frame is inaccurate, errors occur when the radar point cloud is mapped onto the grid map, resulting in positioning errors.
  • a robot mapping method including:
  • the method also includes:
  • mapping the current map block to the global map includes:
  • mapping the point cloud data of the key frame image to the current map block includes:
  • the key frame image is mapped into the current map block according to the local coordinates of the point cloud data of the key frame image.
  • mapping the current map block to the global map includes:
  • before the point cloud data of the key frame image are mapped into the current map block according to the position information, the method further includes:
  • the method also includes:
  • jointly optimize the position information of the robot in each key frame image in the current map block and the map information of the global map, and the optimized map block obtained includes:
  • the method further includes:
  • the robot is controlled according to the positioning information, the action track and the navigation map.
  • a robot including:
  • the first obtaining module is used to obtain the key frame image of the robot, and obtain the point cloud data of the key frame image;
  • a second acquiring module configured to acquire position information of the robot corresponding to the key frame image
  • a first mapping module configured to map the point cloud data of the key frame image to the current map block according to the position information
  • the second mapping module is configured to map the current map block to a global map to obtain a navigation map of the robot.
  • the first acquiring module is further configured to respectively acquire a plurality of key frame images collected by the robot at preset time periods or at preset distance intervals, and to respectively obtain the point cloud data of the plurality of key frame images;
  • the second acquisition module is further configured to determine the position information of the robot corresponding to each key frame image according to the point cloud data;
  • the first mapping module is further configured to map the point cloud data of the multiple key frame images into multiple map blocks according to the position information of the robot corresponding to each key frame image;
  • the second mapping module is further configured to map the plurality of map blocks onto the global map to obtain a navigation map of the robot.
  • the second mapping module is also used for
  • the first mapping module includes:
  • the first conversion submodule is used to convert the point cloud data of the key frame image into the machine coordinate system according to the corresponding relationship between the radar coordinate system and the machine coordinate system, to obtain the machine coordinates of the point cloud data of the key frame image;
  • the second conversion submodule is used to convert the machine coordinates into the world coordinate system according to the corresponding relationship between the machine coordinate system and the world coordinate system, to obtain the local coordinates of the point cloud data of the key frame image;
  • the mapping submodule is configured to map the key frame image into the current map block according to the local coordinates of the point cloud data of the key frame image.
  • the second mapping module is also used to
  • the device also includes:
  • a third acquiring module configured to acquire the state information of the map block
  • a first determining module configured to determine whether there is an initialized current map block according to the state information of the map block
  • a building module configured to create the current map block if it does not exist.
  • the second determining module is configured to, if present, determine that the number of key frames in the current map block is not saturated according to a preset threshold of the number of key frames that can be accommodated in the map block.
  • the device also includes:
  • An optimization module configured to jointly optimize the position information of the robot in each key frame image in the current map block and the map information of the global map if the number of key frames in the current map block is saturated, Get the optimized map block;
  • a setting module configured to map the optimized map block into the global map, and set the state information of the map block as uninitialized.
  • the optimization module includes:
  • the matching submodule is used to perform ICP matching according to the point cloud data of the current key frame image and the point cloud data of all key frame images before the current key frame image in the current map block to obtain a position error;
  • the correction submodule is used to correct the position error, and obtain the optimal position information of the robot in each key frame image of the predetermined number of key frame images;
  • the second mapping submodule is used to map the point cloud data of each key frame image into the current map block according to the optimized position information of the robot in each key frame image, to obtain the optimized map block.
  • the device also includes:
  • the fourth obtaining module is used to obtain the positioning information of the robot on the navigation map, and obtain the completed action trajectory of the robot on the navigation map;
  • a control module configured to control the robot according to the positioning information, the action track and the navigation map.
  • The purpose of this disclosure is achieved through the following technical solution: obtain a key frame image of the robot and the point cloud data of the key frame image; obtain the position information of the robot corresponding to the key frame image; map the point cloud data into the current map block according to the position information; and map the current map block onto the global map to obtain the robot's navigation map. This solves the problem in the related art that, if the positioning information of one frame of image is inaccurate, errors occur when the point cloud data are mapped onto the map, resulting in positioning errors.
  • The present disclosure has the following beneficial effect: the map is divided into multiple map blocks, the position information of the robot is mapped onto the global map on the basis of the multiple map blocks, and the accuracy of robot positioning is improved.
  • FIG. 1 is a hardware structural block diagram of a mobile terminal of a robot mapping method according to an embodiment of the present disclosure
  • FIG. 2 is a flow chart of a robot mapping method according to an embodiment of the present disclosure
  • FIG. 3 is a flow chart of a mapping method based on multiple map blocks according to an embodiment of the present disclosure
  • FIG. 4 is a structural block diagram of a robot according to an embodiment of the present disclosure.
  • FIG. 1 is a block diagram of the hardware structure of a mobile terminal of a robot mapping method according to an embodiment of the present disclosure.
  • The mobile terminal may include one or more processors 102 (only one is shown in FIG. 1; the processor 102 may include, but is not limited to, a microprocessor unit (MPU), a programmable logic device (PLD), or another processing device) and a memory 104 for storing data.
  • the above-mentioned mobile terminal may also include a transmission device 106 for communication functions and an input-output device 108.
  • The structure shown in FIG. 1 is only for illustration and does not limit the structure of the above-mentioned mobile terminal.
  • For example, the mobile terminal may include more or fewer components than those shown in FIG. 1, or have a different configuration with equivalent or greater functionality.
  • The memory 104 can be used to store computer programs, for example, software programs and modules of application software, such as the computer program corresponding to the robot mapping method in the embodiment of the present disclosure. The processor 102 runs the computer program stored in the memory 104 to execute various functional applications and data processing, thereby realizing the above-mentioned method.
  • the memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • the memory 104 may further include a memory that is remotely located relative to the processor 102, and these remote memories may be connected to the mobile terminal 10 through a network. Examples of the aforementioned networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the transmission device 106 is used to receive or transmit data via the network.
  • the specific example of the network mentioned above may include a wireless network provided by the communication provider of the robot 10 .
  • the transmission device 106 includes a network interface controller (NIC for short), which can be connected to other network devices through a base station so as to communicate with the Internet.
  • the transmission device 106 may be a radio frequency (Radio Frequency, RF for short) module, which is used to communicate with the Internet in a wireless manner.
  • FIG. 2 is a flow chart of a method for building a map of a robot according to an embodiment of the disclosure. As shown in FIG. 2, the process includes the following steps:
  • Step S202 acquiring a key frame image of the robot, and acquiring point cloud data of the key frame image
  • Step S204 acquiring the position information of the robot corresponding to the key frame image
  • the point cloud data is collected based on the radar sensor.
  • the position information of the robot can be calculated according to the position of the point cloud data in the robot coordinate system.
  • Step S206 according to the position information, map the point cloud data of the key frame image into the current map block;
  • each map block includes point cloud data of a predetermined number of key frame images
  • Step S208 mapping the current map block to a global map to obtain a navigation map of the robot.
  • This solves the problem in the related art that, if the positioning information of a frame of image is inaccurate, errors occur when the point cloud data are mapped onto the map, resulting in wrong positioning.
  • The map is divided into multiple map blocks, the point cloud data are mapped onto the global map on the basis of these map blocks according to the robot's position information, and the robot is positioned according to the resulting navigation map, which improves the accuracy of robot positioning.
  • A plurality of key frame images collected by the robot at a preset time period or at a preset distance interval are obtained, and the point cloud data of each of the key frame images are obtained;
  • the position information of the robot corresponding to each key frame image is determined according to the point cloud data, and the point cloud data of the multiple key frame images are mapped into multiple map blocks according to that position information. Further, according to the corresponding relationship between the radar coordinate system and the machine coordinate system, the point cloud data of the multiple key frame images are converted into the machine coordinate system.
  • The position information of the robot in the last key frame image serves as the position information of the current map block, where the position information of the current map block is used for alignment between the current map block and its adjacent map blocks.
  • The above step S206 may specifically include: converting the point cloud data of the key frame image into the machine coordinate system according to the pre-stored correspondence between the radar coordinate system and the machine coordinate system, to obtain the machine coordinates of the key frame image; and then converting the machine coordinates into the world coordinate system according to the pre-stored correspondence between the machine coordinate system and the world coordinate system.
  • That is, the point cloud data are first obtained as radar coordinates; since a correspondence exists between the radar coordinate system and the machine coordinate system, the radar coordinates can be converted into machine coordinates, and the machine coordinates can then be converted into world coordinates, yielding the local coordinates of the point cloud data of the key frame image.
  • After the local coordinates of the point cloud data are obtained, the key frame image is mapped into the current map block according to those coordinates.
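The radar-to-machine-to-world chain of step S206 can be illustrated in 2D. The patent gives no formulas, so the pose representation (an `(x, y, theta)` rigid transform) and the function names here are assumptions:

```python
import math

def apply_pose(pose, point):
    """Transform `point` from a child frame into its parent frame, where
    `pose` = (x, y, theta) is the child frame's pose in the parent frame."""
    x, y, theta = pose
    px, py = point
    c, s = math.cos(theta), math.sin(theta)
    return (x + c * px - s * py, y + s * px + c * py)

def radar_to_local(radar_points, radar_to_machine, machine_to_world):
    """Chain described above: radar frame -> machine frame -> world frame."""
    machine_points = [apply_pose(radar_to_machine, p) for p in radar_points]
    return [apply_pose(machine_to_world, p) for p in machine_points]
```

Here `radar_to_machine` would come from the sensor's extrinsic calibration, and `machine_to_world` from the robot's position information for the key frame.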
  • The above step S208 may specifically include: converting the local coordinates of the point cloud data of each key frame image into the coordinate system of the global map according to the position information of the robot in each key frame image of the current map block. Specifically, after the global map is divided into multiple map blocks, the correspondence between the identification information of each map block and its coordinates in the global map's coordinate system is stored. Once the position information of the robot in the key frame images of the current map block is determined, since that position information is expressed in world coordinates and the global map's coordinate system also uses world coordinates, the local coordinates of the point cloud data of the key frame images can be converted into the coordinate system of the global map accordingly.
  • The state information of the map block is acquired; the state information indicates whether an initialized map block currently exists. Whether an initialized current map block exists is determined according to this state information: if it does not exist, the current map block is created; if it exists, whether the number of key frames in the current map block is saturated is determined according to the preset threshold of key frames a map block can hold. If the current map block is not saturated, the point cloud data of further key frames can continue to be mapped into it.
  • If the number of key frames in the current map block is saturated, the position information of the robot in each key frame image in the current map block and the map information of the global map are jointly optimized to obtain the optimized map block.
  • The position error is corrected to obtain the optimized position information of the robot in each of the predetermined number of key frame images.
  • According to the optimized position information of the robot in each key frame image, the point cloud data of each key frame image are mapped into the current map block to obtain the optimized map block, making the robot's position information in the current map block more accurate. The optimized map block is mapped into the global map, and the state information of the map block is set to uninitialized, so that a new map block will be created the next time the state information is checked. Repeating the above steps completes the establishment of the navigation map.
  • The location information of the robot on the navigation map is obtained, together with the completed action track of the robot on the navigation map; the robot is then controlled according to the location information, the action track, and the navigation map, thereby controlling its movement.
  • This embodiment builds a map based on multiple map blocks.
  • The construction of multiple map blocks splits the entire map into a collection of N sub-maps. If a single whole map were used and a pose positioning error occurred, the wrongly posed point cloud, once mapped onto the entire map, would pollute the whole map and make subsequent correction extremely difficult.
  • With multiple sub-maps in place of the whole map, if some places are found to be incorrectly positioned, only the corresponding sub-map block needs to be separated out and corrected, without polluting the entire map; at the same time, the computation time is reduced.
  • In the construction process of multiple map blocks, when no initialized map block is detected, a sub-map block is first initialized, and the current point cloud data are mapped into the sub-map block through the current position information of the robot.
  • The size of a map block can be evaluated in multiple ways; here, a block is considered full when the key frame information it stores exceeds a threshold, with the key frame number threshold generally set to 20. Key frames are sampled by trajectory interval, one every 0.5 meters, and each key frame is mapped into its corresponding sub-map block. When the size of the current sub-map block exceeds 20, the map block is set to the saturated state, a new sub-map block is initialized, and the saturated map block is mapped onto the full map.
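The sampling and saturation policy just described (one key frame per 0.5 meters of travel, at most 20 key frames per map block) might be sketched as follows; the class and function names are illustrative, not from the patent:

```python
import math

KEYFRAME_COUNT_THRESHOLD = 20  # key frames per map block, per the text
KEYFRAME_SPACING = 0.5         # metres travelled between sampled key frames

class MapBlock:
    """A sub-map holding the key frames mapped into it."""
    def __init__(self):
        self.keyframes = []

    @property
    def saturated(self):
        return len(self.keyframes) >= KEYFRAME_COUNT_THRESHOLD

def sample_keyframes(trajectory, spacing=KEYFRAME_SPACING):
    """Keep the first pose, then one pose per `spacing` metres of travel."""
    samples, travelled, prev = [], 0.0, None
    for x, y in trajectory:
        if prev is None:
            samples.append((x, y))
        else:
            travelled += math.hypot(x - prev[0], y - prev[1])
            if travelled >= spacing:
                samples.append((x, y))
                travelled = 0.0
        prev = (x, y)
    return samples

def assign_to_blocks(keyframes):
    """Fill map blocks in order, starting a fresh block when one saturates.
    (In the patent, a saturated block is optimized and merged into the
    full map before the new block is started.)"""
    blocks = [MapBlock()]
    for kf in keyframes:
        if blocks[-1].saturated:
            blocks.append(MapBlock())
        blocks[-1].keyframes.append(kf)
    return blocks
```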
  • The robot obtains its current location and the current complete map information in real time, matches the track it has travelled against the complete map to obtain the places on the map it has not yet cleaned, navigates to those points using its real-time location and map information, and continues cleaning until no unreached points remain on the map, at which point the cleaning work is complete and it returns to charge.
  • the robot needs to know exactly the current position of the machine and the unswept area, and the positioning information is provided by the radar sensor.
  • Each piece of positioning information and the map information within a map block are jointly corrected, and the map blocks are then merged into the entire map, in order to reduce the impact of inaccurate positioning information on the whole map while improving the accuracy of map construction.
  • The general positioning front end is based on the Markov assumption, that is, the positioning information of the 20 key frames in a map block is treated as mutually independent. As a result, a deviation in the positioning information of one frame is carried into subsequent positioning, so the positioning error grows larger and larger.
  • Joint optimization instead considers the positioning information within a map block to be mutually coupled: a pose is not affected solely by the last positioning information, but is related to all key frames earlier in the map block.
  • The specific implementation process is as follows. Within a map block, assume the current key frame is the Nth frame and its currently calculated position is Pn. Owing to the influence of the previous N-1 frames, the error in Pn accumulates continuously and must be corrected through observation.
  • The correction method uses the local point cloud data of the current key frame to perform ICP matching against the point clouds of the previous N-1 frames.
  • The resulting error can then be corrected through the local point cloud positioning information, completing the overall joint optimization.
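The patent does not spell out the ICP computation. As an illustration, the inner step of one point-to-point ICP iteration in 2D — the closed-form least-squares rigid alignment of already-matched point pairs — can be written as:

```python
import math

def best_rigid_transform(src, dst):
    """Closed-form 2D least-squares rigid alignment of paired points
    (src[i] matched to dst[i]). Returns (tx, ty, theta) such that
    rotating src by theta and translating by (tx, ty) best overlays
    it onto dst."""
    n = len(src)
    cxs = sum(p[0] for p in src) / n
    cys = sum(p[1] for p in src) / n
    cxd = sum(p[0] for p in dst) / n
    cyd = sum(p[1] for p in dst) / n
    sxx = sxy = syx = syy = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        ax, ay = xs - cxs, ys - cys  # centred source point
        bx, by = xd - cxd, yd - cyd  # centred destination point
        sxx += ax * bx
        sxy += ax * by
        syx += ay * bx
        syy += ay * by
    theta = math.atan2(sxy - syx, sxx + syy)  # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = cxd - (c * cxs - s * cys)  # translation aligning the centroids
    ty = cyd - (s * cxs + c * cys)
    return tx, ty, theta
```

A full ICP loop would alternate this solve with re-matching each source point to its nearest destination point until the residual error converges; the resulting transform is the pose correction applied to Pn.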
  • The map coordinate system of the last key frame gives the position information of the sub-map block; after the optimization is completed, the map block only needs to be mapped onto the entire map through this position information.
  • Determining the mapping state information of the current map block where the robot is located includes determining whether map building is currently possible, which is decided by the current state of the machine: when the machine is moving rapidly, tilting, recovering its position, slipping, or in a similar state, the mapping state is off; when the machine is in a normal state, the mapping state is on.
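The on/off decision above reduces to a simple check of the machine state. A minimal sketch, with state names paraphrased from the text rather than taken from the patent:

```python
# Machine states in which mapping is suspended, per the description above
# (the state names here are paraphrases, not identifiers from the patent).
MAPPING_DISABLED_STATES = {
    "rapid_movement", "tilting", "positioning_recovery", "slipping",
}

def mapping_enabled(machine_state):
    """The mapping state is on only when the machine is in a normal state."""
    return machine_state not in MAPPING_DISABLED_STATES
```

Gating map updates this way keeps point clouds captured under unreliable poses (e.g. while slipping) out of the map blocks, which is what reduces false obstacle construction.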
  • Fig. 3 is a flow chart of a mapping method based on multiple map blocks according to an embodiment of the present disclosure, as shown in Fig. 3 , including:
  • step S304 judging whether the current map block is saturated, if the judging result is no, go to step S305, otherwise go to step S308;
  • step S305 judging whether it is possible to build a map, if the judging result is no, go to step S306, otherwise go to step S307;
  • Multi-map blocks and map buildable state judgment are beneficial to reduce the false construction of map obstacles.
  • the map display is optimized, and the overlapping map problem caused by false construction is reduced.
  • The technical solution of the present disclosure may, in essence or in the part that contributes over the prior art, be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc) and includes several instructions that cause a terminal device (which may be a mobile phone, computer, server, or network device) to perform the methods described in the various embodiments of the present disclosure.
  • A robot is also provided. It is used to implement the above embodiments and preferred implementation modes; what has already been explained will not be repeated.
  • the term "module” may be a combination of software and/or hardware that realizes a predetermined function.
  • Although the devices described in the following embodiments are preferably implemented in software, implementations in hardware, or in a combination of software and hardware, are also possible and contemplated.
  • Fig. 4 is a structural block diagram of a robot according to an embodiment of the present disclosure. As shown in Fig. 4, the device includes:
  • the first obtaining module 42 is used to obtain the key frame image of the robot, and obtain the point cloud data of the key frame image;
  • the second acquiring module 44 is configured to acquire the position information of the robot corresponding to the key frame image
  • the first mapping module 46 is configured to map the point cloud data of the key frame image into the current map block according to the position information
  • the second mapping module 48 is configured to map the current map block onto the global map to obtain the navigation map of the robot.
  • the first acquiring module 42 is further configured to respectively acquire a plurality of key frame images collected by the robot at preset time periods or at preset distance intervals, and to respectively acquire the point cloud data of the plurality of key frame images;
  • the second acquisition module 44 is further configured to determine the position information of the robot corresponding to each key frame image according to the point cloud data;
  • the first mapping module 46 is further configured to map the point cloud data of the multiple key frame images into multiple map blocks according to the position information of the robot corresponding to each key frame image;
  • the second mapping module 48 is further configured to map the plurality of map blocks onto the global map to obtain a navigation map of the robot.
  • the second mapping module 48 is also used to
  • the first mapping module 46 includes:
  • the first conversion submodule is used to convert the point cloud data of the key frame image into the machine coordinate system according to the corresponding relationship between the radar coordinate system and the machine coordinate system, to obtain the machine coordinates of the point cloud data of the key frame image;
  • the second conversion submodule is used to convert the machine coordinates into the world coordinate system according to the corresponding relationship between the machine coordinate system and the world coordinate system, to obtain the local coordinates of the point cloud data of the key frame image;
  • the mapping submodule is configured to map the key frame image into the current map block according to the local coordinates of the point cloud data of the key frame image.
  • the second mapping module 48 is also used to
  • the device also includes:
  • a third acquiring module configured to acquire the state information of the map block;
  • a first determining module configured to determine, according to the state information of the map block, whether an initialized current map block exists;
  • a building module configured to create the current map block if it does not exist;
  • a second determining module configured to determine, if it does exist, whether the number of key frames in the current map block is saturated, according to a preset threshold for the number of key frames a map block can accommodate.
  • the device also includes:
  • an optimization module configured to, if the number of key frames in the current map block is saturated, jointly optimize the position information of the robot in each key frame image in the current map block and the map information of the global map, to obtain an optimized map block;
  • a setting module configured to map the optimized map block into the global map, and set the state information of the map block as uninitialized.
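The map-block bookkeeping described by the determining, building, and setting modules can be sketched as follows. The class names and the saturation threshold are illustrative assumptions, and the joint optimization step is elided to a comment:

```python
from dataclasses import dataclass, field

MAX_KEY_FRAMES = 50  # illustrative threshold; the disclosure leaves the value unspecified

@dataclass
class MapBlock:
    key_frames: list = field(default_factory=list)
    initialized: bool = True

    def is_saturated(self) -> bool:
        return len(self.key_frames) >= MAX_KEY_FRAMES

class Mapper:
    def __init__(self):
        self.current_block = None
        self.global_map = []  # finished (optimized) blocks

    def add_key_frame(self, key_frame) -> None:
        # Step 1: if there is no initialized current block, create one.
        if self.current_block is None or not self.current_block.initialized:
            self.current_block = MapBlock()
        # Step 2: insert the key frame into the current block.
        self.current_block.key_frames.append(key_frame)
        # Step 3: when saturated, (jointly optimize and) merge the block into
        # the global map, then mark it uninitialized so the next key frame
        # opens a fresh block.
        if self.current_block.is_saturated():
            self.global_map.append(self.current_block)
            self.current_block.initialized = False
```

Marking a merged block as uninitialized, rather than deleting it, matches the state-information check above: the next key frame finds no initialized current block and triggers creation of a new one.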
  • the optimization module includes:
  • the matching submodule is used to perform ICP matching between the point cloud data of the current key frame image and the point cloud data of all key frame images before it in the current map block, to obtain a position error;
  • the correction submodule is used to correct the position error and obtain the optimized position information of the robot in each of the predetermined number of key frame images;
  • the second mapping submodule is used to map the point cloud data of each key frame image into the current map block according to the optimized position information of the robot in each key frame image, to obtain the optimized map block.
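ICP matching estimates the rigid transform, i.e. the position error, between the current key frame's point cloud and earlier clouds. The core alignment step can be sketched with the SVD (Kabsch) closed-form solution, here assuming point correspondences are already known; a full ICP implementation would iterate this step with nearest-neighbour matching, which is elided:

```python
import numpy as np

def rigid_align(source: np.ndarray, target: np.ndarray):
    """Best-fit rotation R and translation t mapping source onto target
    (Kabsch/SVD solution for two corresponding point sets)."""
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    H = src_c.T @ tgt_c
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(source.shape[1])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection
    R = Vt.T @ D @ U.T
    t = target.mean(axis=0) - R @ source.mean(axis=0)
    return R, t

# A cloud shifted by (0.5, -0.2): the recovered t is the position error
# that the correction submodule would apply to the robot pose.
cloud = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
shifted = cloud + np.array([0.5, -0.2])
R, t = rigid_align(cloud, shifted)
```

Once the error transform is recovered, re-projecting each key frame's points with the corrected pose yields the optimized map block described above.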
  • the device also includes:
  • a fourth acquiring module configured to acquire the positioning information of the robot on the navigation map, and to acquire the action trajectory the robot has completed on the navigation map;
  • a control module configured to control the robot according to the positioning information, the action track and the navigation map.
  • the above-mentioned modules can be realized by software or hardware. For the latter, this can be achieved in the following ways, but is not limited to them: the above-mentioned modules are all located in the same processor; or the above-mentioned modules, in any combination, are located in different processors.
  • Embodiments of the present disclosure also provide a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to execute the steps in any one of the above method embodiments when running.
  • the above-mentioned storage medium may be configured to store a computer program for performing the steps in the above method embodiments.
  • the above-mentioned storage medium may include, but is not limited to, various media capable of storing computer programs, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a mobile hard disk, a magnetic disk or an optical disk.
  • Embodiments of the present disclosure also provide an electronic device, including a memory and a processor, where a computer program is stored in the memory, and the processor is configured to run the computer program to perform the steps in any one of the above method embodiments.
  • the above-mentioned electronic device may further include a transmission device and an input-output device, wherein the transmission device is connected to the above-mentioned processor, and the input-output device is connected to the above-mentioned processor.
  • the above-mentioned processor may be configured to execute, through a computer program, the steps in the above method embodiments.
  • each module or step of the above disclosure can be realized by a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network composed of multiple computing devices. Alternatively, they may be implemented in program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device, and in some cases the steps shown or described may be carried out in an order different from that shown here; or they may be separately fabricated into individual integrated circuit modules, or multiple modules or steps among them may be fabricated into a single integrated circuit module for implementation.
  • the present disclosure is not limited to any specific combination of hardware and software.

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The present disclosure relates to a map construction method for a robot, and a robot. The method comprises: acquiring a key frame image of a robot, and acquiring point cloud data of the key frame image; acquiring position information of the robot in the key frame image; mapping the point cloud data of the key frame image into a current map block according to the position information; and mapping the current map block onto a global map to obtain a navigation map of the robot. The present disclosure solves the problem in the prior art that, if the positioning information of a frame image is inaccurate, an error may occur when mapping point cloud data onto a map, causing a positioning error. Since the map is divided into a plurality of map blocks, the point cloud data are mapped onto the global map according to the plurality of map blocks and the position information of the robot, the robot performs positioning according to the navigation map, and the positioning accuracy of the robot is improved.
PCT/CN2022/094632 2021-07-27 2022-05-24 Map construction method for robot, and robot WO2023005377A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110852018.4A CN113674351B (zh) 2021-07-27 2021-07-27 Map construction method for robot, and robot
CN202110852018.4 2021-07-27

Publications (1)

Publication Number Publication Date
WO2023005377A1 true WO2023005377A1 (fr) 2023-02-02

Family

ID=78540497

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/094632 WO2023005377A1 (fr) Map construction method for robot, and robot

Country Status (2)

Country Link
CN (1) CN113674351B (fr)
WO (1) WO2023005377A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116030213A (zh) * 2023-03-30 2023-04-28 千巡科技(深圳)有限公司 Multi-robot cloud-edge collaborative map creation and dynamic digital twin method and system

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
CN113674351B (zh) 2021-07-27 2023-08-08 追觅创新科技(苏州)有限公司 Map construction method for robot, and robot
CN116012624B (zh) * 2023-01-12 2024-03-26 阿波罗智联(北京)科技有限公司 Positioning method and apparatus, electronic device, medium, and autonomous driving device

Citations (3)

Publication number Priority date Publication date Assignee Title
CN101000507A (zh) * 2006-09-29 2007-07-18 浙江大学 Method for simultaneous localization and mapping of a mobile robot in an unknown environment
CN111322993A (zh) * 2018-12-13 2020-06-23 杭州海康机器人技术有限公司 Visual positioning method and apparatus
CN113674351A (zh) * 2021-07-27 2021-11-19 追觅创新科技(苏州)有限公司 Map construction method for robot, and robot

Family Cites Families (13)

Publication number Priority date Publication date Assignee Title
CN107817795B (zh) * 2017-10-25 2019-11-19 上海思岚科技有限公司 Method and system for controlling map construction by a mobile robot
CN109816769A (zh) * 2017-11-21 2019-05-28 深圳市优必选科技有限公司 Scene map generation method, apparatus and device based on a depth camera
CN109887053B (zh) * 2019-02-01 2020-10-20 广州小鹏汽车科技有限公司 SLAM map stitching method and system
CN111609853A (zh) * 2019-02-25 2020-09-01 北京奇虎科技有限公司 Three-dimensional map construction method, sweeping robot and electronic device
CN110276826A (zh) * 2019-05-23 2019-09-24 全球能源互联网研究院有限公司 Method and system for constructing a map of a power grid operation environment
CN112132745B (zh) * 2019-06-25 2022-01-04 南京航空航天大学 Multi-submap stitching feature fusion method based on geographic information
CN111174799B (zh) * 2019-12-24 2023-02-17 Oppo广东移动通信有限公司 Map construction method and apparatus, computer-readable medium, and terminal device
CN111402332B (zh) * 2020-03-10 2023-08-18 兰剑智能科技股份有限公司 SLAM-based AGV composite mapping and navigation positioning method and system
CN111536964B (zh) * 2020-07-09 2020-11-06 浙江大华技术股份有限公司 Robot positioning method and apparatus, and storage medium
CN112004196B (zh) * 2020-08-24 2021-10-29 唯羲科技有限公司 Positioning method, apparatus, terminal and computer storage medium
CN112123343B (zh) * 2020-11-25 2021-02-05 炬星科技(深圳)有限公司 Point cloud matching method, device and storage medium
CN112595323A (zh) * 2020-12-08 2021-04-02 深圳市优必选科技股份有限公司 Robot and map construction method and apparatus thereof
CN112710318B (zh) * 2020-12-14 2024-05-17 深圳市商汤科技有限公司 Map generation method, path planning method, electronic device and storage medium


Non-Patent Citations (3)

Title
BAICHAQINGHUAN: "Occupancy Grid Map", BLOG CSDN, CSDN, CN, CN, pages 1 - 4, XP009543503, Retrieved from the Internet <URL:https://blog.csdn.net/zhao_ke_xue/article/details/109040676> *
FANG HUI, YANG MING, YANG RU-QING: "Image Alignment Based Outdoor Ground Feature Mapping", SHANGHAI JIAOTONG DAXUE XUEBAO - JOURNAL OF SHANGHAI JIAOTONGUNIVERSITY, SHANGHAI JIATONG UNIVERSITY PRESS, SHANGHAI,, CN, vol. 43, no. 6, 30 June 2009 (2009-06-30), CN , pages 893 - 893, XP093030449, ISSN: 1006-2467, DOI: 10.16183/j.cnki.jsjtu.2009.06.008 *
MORAVEC, H. P. ET AL.: "High resolution maps from wide angle sonar", 1985 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, 28 March 1985 (1985-03-28), pages 116 - 121, XP055147423, DOI: 10.1109/ROBOT.1985.1087316 *


Also Published As

Publication number Publication date
CN113674351B (zh) 2023-08-08
CN113674351A (zh) 2021-11-19

Similar Documents

Publication Publication Date Title
WO2023005377A1 (fr) Map construction method for robot, and robot
CN111590595B (zh) Positioning method and apparatus, mobile robot, and storage medium
CN107179086B (zh) Lidar-based mapping method, apparatus and system
EP3660618A1 (fr) Map construction and positioning of a robot
CN109813319B (zh) Open-loop optimization method and system based on SLAM mapping
EP3974778B1 (fr) Method and apparatus for updating a working map of a mobile robot, and storage medium
CN207164586U (zh) Navigation system for a sweeping robot
US20210255638A1 (en) Area Division and Path Forming Method and Apparatus for Self-Moving Device and Automatic Working System
CN109965781B (zh) Control method, apparatus and system for collaborative work of sweeping robots
CN110749895B (zh) Positioning method based on lidar point cloud data
JP6649743B2 (ja) Consistency evaluation device and consistency evaluation method
CN110895334A (zh) Calibration device and method for an unmanned sweeper with a virtual wall, based on lidar and GPS fusion
EP4066078A1 (fr) Method and apparatus for area division and path forming of a self-moving device, and automatic working system
CN113475977B (zh) Robot path planning method and apparatus, and robot
CN113741438A (zh) Path planning method and apparatus, storage medium, chip and robot
CN111679664A (zh) Three-dimensional map construction method based on a depth camera, and sweeping robot
CN111609853A (zh) Three-dimensional map construction method, sweeping robot and electronic device
WO2020135593A1 (fr) Calibration method for a floor-sweeping record map, sweeping robot and mobile terminal
CN115981305A (zh) Path planning and control method and apparatus for a robot, and robot
WO2024007807A1 (fr) Error correction method and apparatus, and mobile device
WO2023160698A1 (fr) Dynamic full-coverage path planning method and apparatus, cleaning device and storage medium
CN112799389B (zh) Path planning method for an automatic walking area, and automatic walking device
CN116698014A (zh) Map fusion and stitching method based on multi-robot laser SLAM and visual SLAM
CN114935341B (зh) Novel SLAM navigation computation video recognition method and apparatus
CN109389677B (zh) Real-time construction method, system, apparatus and storage medium for a three-dimensional real-scene map of a house

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22847989

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE