CN111928866B - Robot map difference updating method and device - Google Patents


Info

Publication number
CN111928866B
CN111928866B CN202011035189.XA CN202011035189A
Authority
CN
China
Prior art keywords
map
information
pose
difference
robot
Prior art date
Legal status
Active
Application number
CN202011035189.XA
Other languages
Chinese (zh)
Other versions
CN111928866A
Inventor
周孙春
庞梁
程伟
白静
陈士凯
Current Assignee
Silan Robot Yancheng Co ltd
Original Assignee
Shanghai Slamtec Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Slamtec Co Ltd filed Critical Shanghai Slamtec Co Ltd
Priority to CN202011035189.XA
Publication of CN111928866A
Application granted
Publication of CN111928866B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26: Navigation specially adapted for navigation in a road network
    • G01C21/28: Navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30: Map- or contour-matching
    • G01C21/32: Structuring or formatting of map data

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The method comprises the steps of: positioning the robot according to sensor information and an original map, and issuing a target positioning event according to the positioning result; calculating map difference information based on the target positioning event; releasing map update information according to the stored map information and the map difference information; and rendering the map according to the map update information and issuing a new map notification. The problems of heavy computation and of introducing redundant noise are thereby avoided, the constructed map remains usable, and the constructed map is guaranteed to match the actual working environment.

Description

Robot map difference updating method and device
Technical Field
The application relates to the technical field of robots, in particular to a robot map difference updating method and device.
Background
The map is the basis for a mobile robot to localize and navigate. In practical applications, however, changes in the working scene may cause the map created in advance to no longer match the actual working environment. If this problem is not handled, the mobile robot suffers localization loss and navigation failure. To address it, the prior art mainly keeps the robot's map building permanently enabled to ensure that the map matches the working environment.
However, this technique has the following disadvantages: 1) the calculation amount is large, as map building that is always on excessively consumes the robot's computing capacity; 2) redundant noise points are introduced, as dynamic obstacles in the environment, such as pedestrians, are stored into the map while mapping is on; 3) if the robot drifts during navigation, environment information is wrongly stored into the map, rendering the map unusable.
Disclosure of Invention
An object of the present application is to provide a method and an apparatus for robot map difference update, which solve the problems of large calculation amount, introduction of redundant noise, map construction failure and map unavailability in the prior art.
According to an aspect of the present application, there is provided a method for robot map difference update, the method including:
positioning the robot according to the sensor information and the original map, and issuing a target positioning event according to a positioning result;
calculating map difference information based on the target positioning event;
releasing map updating information according to the stored map information and the map difference information;
and rendering the map according to the map updating information and issuing a new map notification.
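The four steps above can be sketched as a minimal event-driven pipeline. The following Python sketch is illustrative only; all function names, data shapes, and the threshold are hypothetical and not part of the disclosed apparatus:

```python
def localize(sensor_info, original_map):
    """Position the robot; return its pose and, when localization quality
    is low, a target positioning (low-positioning) event, else None."""
    pose = (0.0, 0.0, 0.0)            # stub pose (x, y, theta)
    event = {"pose": pose}            # issued only on a low-positioning result
    return pose, event

def compute_map_difference(event, original_map):
    """Compute map difference information around the event location."""
    return {"center": event["pose"][:2], "cells_changed": 12}

def analyze_difference(stored_map, diff_info, threshold=10):
    """Release map update information only if the stored map and the new
    difference disagree by more than a threshold."""
    return diff_info if diff_info["cells_changed"] > threshold else None

def update_map(original_map, update_info):
    """Render the new map and notify consumers of the new map."""
    return dict(original_map, updated=True)

# One pass through the four steps:
current_map = {"resolution": 0.05}
pose, event = localize(None, current_map)
if event:
    diff = compute_map_difference(event, current_map)
    update = analyze_difference(current_map, diff)
    if update:
        current_map = update_map(current_map, update)
```

In the disclosed apparatus these four functions correspond to the positioning, map difference calculation, map difference analysis, and map update modules, which communicate by published events rather than direct calls.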
Further, the positioning of the robot is carried out according to the sensor information and the original map, and the target positioning event is issued according to the positioning result, and the method comprises the following steps:
acquiring sensor information and mileage information, positioning the robot according to the sensor information and an original map, and determining the current pose of the robot;
acquiring a pose of the robot after the last time optimization and a pose of the robot after the current time optimization obtained by optimizing the current pose;
and judging whether to issue a target positioning event or not according to the sensor information, the mileage information, the pose optimized at the last moment, the current pose and the original map.
Further, judging whether to issue a target positioning event according to the sensor information, the mileage information, the pose optimized at the last moment, the current pose and the original map, and the method comprises the following steps:
judging whether the particle convergence value corresponding to the current pose is smaller than a convergence threshold value or not according to the sensor information, the mileage information, the optimized pose at the last moment, the current pose and an original map, and judging the pose distance between the optimized pose at the last moment and the optimized pose at the current moment if the particle convergence value corresponding to the current pose is not smaller than the convergence threshold value;
and if the pose distance is smaller than a distance threshold, evaluating a matching value of a local map obtained by current observation and an original map according to a laser matching method, and if the matching value is smaller than a preset threshold, issuing a target positioning event.
Further, the method comprises:
and judging whether the particle convergence value corresponding to the current pose is smaller than a convergence threshold value or not according to the sensor information, the mileage information, the optimized pose at the last moment, the current pose and the original map, and if so, generating a difference map of the current environment according to the sensor information.
Further, the method comprises:
and if the pose distance is greater than the distance threshold, re-acquiring the sensor information and the mileage information, and judging whether the particle convergence value corresponding to the current pose is smaller than the convergence threshold according to the re-acquired information.
Further, calculating map difference information based on the target positioning event, comprising:
dividing the original map into a plurality of map tiles;
determining a map block where the target positioning event is located and adjacent map blocks to determine a differential map block group;
calculating the difference between each map block in the differential map block group and the map of the area corresponding to the original map to obtain a first difference value;
and if the first difference value is larger than a first threshold value, releasing map difference information belonging to the map block corresponding to the first difference value.
Further, distributing map update information according to the stored map information and the map difference information, comprising:
based on received map difference information, requesting system built-in map information, acquiring stored map information, and calculating the difference between the stored map information and the map difference information to obtain a second difference value;
and if the second difference value is larger than a second threshold value, releasing map updating information.
According to another aspect of the present application, there is also provided an apparatus for robot map difference update, the apparatus including: a positioning module, a map difference calculating module, a map difference analyzing module and a map updating module,
the positioning module is used for positioning the robot according to the sensor information and the original map and issuing a target positioning event to the map difference calculation module according to a positioning result;
the map difference calculation module is used for receiving the target positioning event, calculating map difference information and issuing the map difference information;
the map difference analysis module is used for issuing map updating information according to the received map difference information and the stored map information;
the map updating module is used for rendering the map according to the map updating information and issuing a new map notification.
According to yet another aspect of the present application, there is also provided a computer readable medium having computer readable instructions stored thereon, the computer readable instructions being executable by a processor to implement the method as described above.
Compared with the prior art, in the present application the robot is positioned according to the sensor information and the original map, and a target positioning event is issued according to the positioning result; map difference information is calculated based on the target positioning event; map update information is released according to the stored map information and the map difference information; and the map is rendered according to the map update information and a new map notification is issued. The problems of heavy computation and of introducing redundant noise are thereby avoided, the constructed map remains usable, and the constructed map is guaranteed to match the actual working environment.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
fig. 1 illustrates a schematic structural diagram of an apparatus for robot map difference update provided according to an aspect of the present application;
FIG. 2 illustrates a flow diagram of a method for robot map difference update provided in accordance with an aspect of the present application;
FIG. 3 is a flow diagram illustrating a process for determining a low-positioning event in an exemplary embodiment of the present application;
FIG. 4 is a flow chart illustrating a method for calculating map differences according to an embodiment of the present disclosure;
fig. 5 is a flowchart illustrating a method for map difference analysis in an embodiment of the present application.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present application is described in further detail below with reference to the attached figures.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (e.g., Central Processing Units (CPUs)), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change RAM (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media does not include transitory computer-readable media (transient media), such as modulated data signals and carrier waves.
Fig. 1 is a schematic structural diagram of an apparatus for robot map difference update according to an aspect of the present application. The apparatus comprises a positioning module 11, a map difference calculation module 12, a map difference analysis module 13 and a map update module 14. The positioning module 11 positions the robot according to sensor information and the original map, and issues a target positioning event to the map difference calculation module 12 according to the positioning result; the map difference calculation module 12 receives the target positioning event, calculates map difference information, and distributes it; the map difference analysis module 13 issues map update information according to the received map difference information and the stored map information; and the map update module 14 renders a map according to the map update information and issues a new map notification. Here, the positioning module performs positioning using the sensor information $z$ and the original map $m$, and issues a target positioning event to the map difference calculation module, the target positioning event being a low-positioning event ($E_{low}$), meaning that there is a large difference between the robot's current observation and the environment, or that the robot's particles do not converge. The map difference calculation module receives the robot's low-positioning event $E_{low}$, then calculates and issues difference information; the map difference analysis module receives the map difference information, analyzes it against the stored difference map, and issues update information; and the map update module receives the map update information, renders a map, and issues a new map notification. The problems of heavy computation, introduction of redundant noise points, map construction failure and the like can thereby be solved.
Fig. 2 shows a flowchart of a method for robot map difference update according to an aspect of the present application, the method comprising: step S11 to step S14,
in step S11, positioning the robot based on the sensor information and the original map, and issuing a target positioning event based on the positioning result; here, the sensor information is information observed by the robot using the sensor, and the target positioning event is a low positioning event (E)low) This means that the robot has a large difference in the current observed environment or the robot particles do not converge. The robot uses the sensor information z and the original map m to carry out positioning and issues a low positioning event Elow
In step S12, map difference information is calculated based on the target positioning event; here, when the map difference calculation module of the robot receives the issued low localization event, the map difference information is calculated and issued.
Subsequently, in step S13, map update information is distributed based on the stored map information and the map difference information; here, upon receiving the distributed map difference information, the difference between the already stored map information and the map difference information is analyzed, thereby distributing the map update information. Thus, in step S14, a map is rendered according to the map update information and a new map notification is issued. Here, a map is rendered according to the released map update information, and a new map is obtained, which matches the actual environment.
In an embodiment of the present application, in step S11, sensor information and mileage information are acquired, the robot is positioned according to the sensor information and the original map, and the current pose of the robot is determined; the pose of the robot optimized at the last moment and the pose optimized at the current moment, obtained by optimizing the current pose, are acquired; and whether to issue a target positioning event is judged according to the sensor information, the mileage information, the pose optimized at the last moment, the current pose and the original map. Specifically, the positioning module of the robot acquires sensor information $z$ and mileage information $u$, positions the robot to determine its current pose, and, according to the current pose $x_t$ of the robot and the original map $m$, optimizes the pose at each moment using a Monte Carlo method to obtain the optimized pose of the robot; the pose optimized at the current moment is $\hat{x}_t$ and the pose optimized at the last moment is $\hat{x}_{t-1}$.
Therefore, whether the low positioning event is generated or not is judged according to the sensor information, the mileage information, the pose optimized at the last moment, the current pose and the original map, and the low positioning event is issued. The specific judgment process is as follows:
Judging whether the particle convergence value corresponding to the current pose is smaller than a convergence threshold according to the sensor information, the mileage information, the pose optimized at the last moment, the current pose and the original map; if it is not smaller than the convergence threshold, judging the pose distance between the pose optimized at the last moment and the pose optimized at the current moment; and if the pose distance is smaller than a distance threshold, evaluating a matching value between the local map obtained from the current observation and the original map by a laser matching method, and issuing a target positioning event if the matching value is smaller than a preset threshold. The pose $\hat{x}_t$ of the robot after optimization at the current moment is obtained by the Monte Carlo method, and the convergence of the particles is judged according to the following formula:

$$bel(x_t) = \eta\, p(z_t \mid x_t, m) \int p(x_t \mid x_{t-1}, u_t)\, bel(x_{t-1})\, \mathrm{d}x_{t-1}$$

wherein $\hat{x}_{t-1}$ denotes the pose optimized at the last moment, $p(x_t \mid x_{t-1}, u_t)$ represents the prediction of the robot position, $p(z_t \mid x_t, m)$ represents the observation of the current position of the robot, and $bel(x_t)$ represents the probability estimate of the robot position; when $bel(x_t)$ converges, the particles are considered converged. If the convergence condition of the particles is larger than a certain threshold, the distance between the pose $\hat{x}_{t-1}^{low}$ of the last low-positioning event and the pose $\hat{x}_t$ optimized at the current moment is judged, i.e.

$$d = \lVert \hat{x}_t - \hat{x}_{t-1}^{low} \rVert$$
And if the distance $d$ is smaller than the distance threshold, a matching value between the current observation and the original map is evaluated by a laser matching method, and if the matching value is lower than a preset threshold, a low-positioning event is sent. The distance threshold may be set in a configuration file, for example equal to the robot's diameter, or to the size of a map block used when dividing the map. For matching, a local map is generated from the observation and matched against the original map using the ICP (Iterative Closest Point) algorithm; a matching value is then computed from the matching result. The preset threshold for the matching value lies in the range [0, 1], the matching value indicating what percentage of points in the current observation correspond to points in the original map after matching.
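The decision cascade described above (particle convergence, then pose distance, then matching value) can be summarized in a short sketch. The function name, argument names, and return convention below are assumptions for illustration, not the patent's interface:

```python
import math

def should_issue_low_positioning_event(
    particle_convergence,    # bel(x_t): particle convergence value from MCL
    convergence_threshold,
    last_event_pose,         # pose at the last low-positioning event
    current_pose,            # currently optimized pose (x, y, theta)
    distance_threshold,      # e.g. the robot diameter or one map-block size
    match_value,             # laser (ICP) matching value in [0, 1]
    match_threshold,
):
    """Return (issue_event, next_action) per the cascade in the text."""
    if particle_convergence < convergence_threshold:
        # Particles did not converge: build a difference map instead.
        return False, "build difference map from current observation"
    d = math.hypot(current_pose[0] - last_event_pose[0],
                   current_pose[1] - last_event_pose[1])
    if d > distance_threshold:
        # Too far from the last event: start over with fresh data.
        return False, "re-acquire sensor and odometry information"
    if match_value < match_threshold:
        return True, "issue low-positioning event"
    return False, "localization acceptable"
```

For example, a converged particle set, a small pose displacement, and a poor match value together trigger the event.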
In an embodiment of the present application, the method includes: and judging whether the particle convergence value corresponding to the current pose is smaller than a convergence threshold value or not according to the sensor information, the mileage information, the optimized pose at the last moment, the current pose and the original map, and if so, generating a difference map of the current environment according to the sensor information. If the particle convergence condition is judged to be smaller than the convergence threshold value, a difference map is constructed by utilizing the observation information and the current robot posture, and the particle convergence condition is judged again according to the difference map.
In an embodiment of the present application, the method includes: if the pose distance is greater than the distance threshold, re-acquiring the sensor information and the mileage information, and judging whether the particle convergence value corresponding to the current pose is smaller than the convergence threshold according to the re-acquired information. That is, if the distance between the pose $\hat{x}_{t-1}^{low}$ optimized when the low-positioning event was last sent and the pose $\hat{x}_t$ optimized at the current moment is greater than the distance threshold, the sensor information and the mileage information need to be acquired again, the robot is positioned according to the sensor information and the original map, and the particle convergence condition is judged again.
In the embodiment of the application, the pose distance between the pose optimized at the last moment and the pose optimized at the current moment may be judged first; when the pose distance is smaller than the distance threshold, the next step is carried out: evaluating the matching value between the local map obtained from the current observation and the original map by a laser matching method, and issuing a target positioning event if the matching value is smaller than the preset threshold. Alternatively, the following steps may be performed simultaneously: calculating the pose distance between the pose optimized at the last moment and the pose optimized at the current moment, and calculating the matching value between the local map obtained from the current observation and the original map; it is then judged at the same time whether the distance and the matching value both satisfy their conditions (are smaller than the corresponding thresholds). The execution order of these steps in the embodiments of the present application is only an example.
The following description takes as an example the variant in which the distance and the matching value are computed simultaneously, i.e. the pose distance between the pose optimized at the last moment and the pose optimized at the current moment and the matching value between the local map obtained from the current observation and the original map are computed at the same time, and it is then judged whether both satisfy their conditions. Specifically, fig. 3 shows a schematic flow chart of determining a low-positioning event in an embodiment of the present application. In step S1, the positioning module obtains sensor observation information $z$ and mileage information $u$ and, from the current pose $x_t$ of the robot and the original map $m$, obtains the optimized pose $\hat{x}_t$ at the current moment using a Monte Carlo method. If the convergence of the particles is higher than a certain threshold, step S2 is performed; otherwise step S4 is performed. The particle convergence is

$$bel(x_t) = \eta\, p(z_t \mid x_t, m) \int p(x_t \mid x_{t-1}, u_t)\, bel(x_{t-1})\, \mathrm{d}x_{t-1}$$

wherein $\hat{x}_{t-1}$ represents the pose optimized at the last moment.
Step S2: judging the distance between the pose $\hat{x}_{t-1}^{low}$ optimized when the low-positioning event was last sent and the currently optimized pose $\hat{x}_t$.
step S3, evaluating the matching value of the current observation and the original map m by using a laser matching method, and if the distance in the step S2 is less than a threshold value and the matching value is lower than the threshold value, sending a low positioning event to a map difference calculation module; otherwise, jumping to step S1; the method comprises the following specific processes:
Figure 286468DEST_PATH_IMAGE018
Figure 845625DEST_PATH_IMAGE019
Figure 761629DEST_PATH_IMAGE020
wherein q isiRepresenting laser observations, R and t representing robot rotation and translation in solution, RpiIndicating rotation of the map point, e (R, t) indicating the error after laser matching, cov (R, t) indicating the covariance of the results R and t, with smaller indicating higher confidence in the matching values, and x indicating a low localization event or not.
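The laser matching error can be evaluated directly for 2-D laser points. In this sketch the rotation is parameterized by an angle, the point correspondences are assumed to be given (a full ICP would also search for them), and the mapping from error to a [0, 1] matching value is an assumed Gaussian kernel rather than the patent's exact formula:

```python
import math

def icp_error(map_points, scan_points, theta, t):
    """E(R, t) = (1/n) * sum_i ||q_i - (R p_i + t)||^2 for 2-D point pairs
    with known correspondences, where R is a rotation by angle `theta`."""
    c, s = math.cos(theta), math.sin(theta)
    total = 0.0
    for (qx, qy), (px, py) in zip(map_points, scan_points):
        rx = c * px - s * py + t[0]   # x component of R p_i + t
        ry = s * px + c * py + t[1]   # y component of R p_i + t
        total += (qx - rx) ** 2 + (qy - ry) ** 2
    return total / len(map_points)

def match_value(map_points, scan_points, theta, t, sigma=0.1):
    """Map the residual into a [0, 1] matching score (assumed kernel)."""
    return math.exp(-icp_error(map_points, scan_points, theta, t) / (2 * sigma ** 2))
```

A perfect alignment gives zero error and a matching value of 1; values below the preset threshold would trigger the low-positioning event.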
And step S4, constructing a difference map by using the observation information and the current robot posture, and jumping to step S1.
In an embodiment of the present application, in step S12, the original map is divided into a plurality of map blocks; the map block where the target positioning event is located and its adjacent map blocks are determined to form a differential map block group; the difference between each map block in the group and the map of the corresponding area of the original map is calculated to obtain a first difference value; and if the first difference value is larger than a first threshold, map difference information belonging to the map block corresponding to the first difference value is released. As shown in fig. 4, the original map is divided into a plurality of map blocks, and on receiving the low-positioning event $E_{low}$ sent by the positioning module, the map difference calculation module computes the block containing $E_{low}$ and its neighbors:

$$block = \left\lfloor \frac{x_e - x_o}{r} \right\rfloor, \qquad blocks = \{\, block + \delta(x, y) \mid x, y \in [-1, 1] \,\}$$

where $r$ represents the resolution (size) of each block, $x_e$ indicates the location where the low-positioning event occurred, $x_o$ represents the origin of the map, and $\delta(x, y)$ represents the relative position between the block where the low-positioning event is located and a block adjacent to it; blocks refers to all blocks adjacent to the block of the low-positioning event. For example, when $x = 0$ and $y = 1$, $block + \delta(x, y)$ is the block above the block where the current low-positioning event is located. Taking all values of $x$ and $y$ yields the set of surrounding adjacent map blocks.
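The block indexing and the neighbor set can be illustrated as follows; the per-axis floor rounding is an assumption consistent with dividing the map at resolution r, and the function names are hypothetical:

```python
def block_of(position, origin, resolution):
    """Index of the map block containing `position`, per axis:
    block = floor((position - origin) / r)."""
    return (int((position[0] - origin[0]) // resolution),
            int((position[1] - origin[1]) // resolution))

def differential_block_group(block):
    """The event's block plus its surrounding neighbors, i.e.
    {block + delta(x, y) | x, y in [-1, 1]} on a grid (9 blocks total)."""
    return {(block[0] + dx, block[1] + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)}
```

For instance, an event at (2.6, 1.1) in a map with origin (0, 0) and block size 1.0 falls in block (2, 1), and its differential block group contains that block and its 8 neighbors.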
The difference between the difference map corresponding to these blocks and the map of the corresponding area in the original map $m$ is then calculated. The corresponding difference map is constructed by the system while the robot is well localized and reflects the state of the current environment; it is divided according to the block size, each block carrying its horizontal and vertical extents sizeX and sizeY and the coordinates x_m and y_m of its starting point in the original map, so that the corresponding area between a block of the difference map and the original map $m$ can be computed. If the block difference for a low-positioning event is larger than the threshold, the difference information belonging to that block is issued; otherwise a low-positioning event $E_{low}$ is received again and the block and neighbors of the new $E_{low}$ are recalculated.
In an embodiment of the present application, in step S13, based on the received map difference information, the system-built map information is requested, the stored map information is obtained, and a difference between the stored map information and the map difference information is calculated to obtain a second difference value; and if the second difference value is larger than a second threshold value, releasing map updating information. Here, as shown in fig. 5, the map difference analysis module receives the difference information, queries a difference map in the storage device, acquires the difference map of the corresponding area if the result is null, and issues map update information, otherwise acquires the difference map of the corresponding area of the robot and compares the acquired difference map with the queried storage result, evaluates the current environmental change, issues the update information if the environmental change is greater than a threshold, and otherwise saves the difference map acquired by the current robot. When the stored results of the difference map and the query are compared to evaluate the current environmental change, the method comprises the following steps:
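The analysis flow just described can be sketched as a single decision function. This is a hypothetical sketch: the function name, the flat-list map encoding, and the return labels are illustrative assumptions:

```python
def analyze_difference_map(current_diff, stored_diff, threshold=0.05):
    """Decide whether to publish map update information.

    If nothing is stored for the region, publish an update; otherwise
    compare the two difference maps cell by cell and publish only when
    the fraction of changed cells exceeds the threshold, else keep the
    newly acquired difference map."""
    if stored_diff is None:
        return "publish-update"
    changed = sum(1 for a, b in zip(stored_diff, current_diff) if a != b)
    change_ratio = changed / len(current_diff)
    return "publish-update" if change_ratio > threshold else "save-current"
```

With the warehouse-style threshold of 0.05, a 10% cell change publishes an update while a 1% change merely saves the current difference map.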
The difference map and the map of the stored result are divided into map blocks b1 and b2, and it is judged whether b1 and b2 are equal in size. If so, b1 and b2 are traversed to calculate the difference between the two maps, counting the cells whose class changed (occupied to free, free to occupied, occupied to unknown, unknown to occupied, and unknown to free), so as to quantify the environment change between the two map blocks. If the change between the two map blocks is greater than the configured threshold, the environment is considered to have changed significantly. The threshold for environment change can be adjusted according to the scene where the robot operates: in a scene with little change, such as a warehouse, it can be set lower, e.g. 0.05, meaning the scene is considered changed when more than 5% of the map cells in the block have changed class.
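The cell-by-cell comparison above can be quantified with a simple count. In this sketch the free/occupied/unknown encoding and the helper names are assumptions; the 0.05 default follows the warehouse example in the text:

```python
FREE, OCCUPIED, UNKNOWN = 0, 1, 2   # assumed cell encoding

def change_ratio(block_a, block_b):
    """Fraction of cells whose occupancy class differs between two
    equally sized map blocks (flat lists of cell classes)."""
    if len(block_a) != len(block_b):
        raise ValueError("blocks must be the same size")
    changed = sum(1 for a, b in zip(block_a, block_b) if a != b)
    return changed / len(block_a)

def environment_changed(block_a, block_b, threshold=0.05):
    """True when more than `threshold` of the cells changed class,
    e.g. more than 5% for a low-change scene such as a warehouse."""
    return change_ratio(block_a, block_b) > threshold
```

For example, 6 changed cells out of 100 give a ratio of 0.06, which exceeds the 5% warehouse threshold and counts as a significant change.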
In addition, an embodiment of the present application further provides a computer-readable medium having computer-readable instructions stored thereon, which can be executed by a processor to implement the foregoing method for robot map difference updating.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, implemented using Application Specific Integrated Circuits (ASICs), general purpose computers or any other similar hardware devices. In one embodiment, the software programs of the present application may be executed by a processor to implement the steps or functions described above. Likewise, the software programs (including associated data structures) of the present application may be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Additionally, some of the steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
In addition, some of the present application may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or techniques in accordance with the present application through the operation of the computer. Program instructions which invoke the methods of the present application may be stored on a fixed or removable recording medium and/or transmitted via a data stream on a broadcast or other signal-bearing medium and/or stored within a working memory of a computer device operating in accordance with the program instructions. An embodiment according to the present application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform a method and/or a solution according to the aforementioned embodiments of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (7)

1. A method of robotic map differential update, the method comprising:
positioning the robot according to the sensor information and the original map, and issuing a target positioning event according to a positioning result;
calculating map difference information based on the target positioning event;
issuing map update information according to the stored map information and the map difference information;
rendering a map according to the map updating information and issuing a new map notification;
wherein positioning the robot according to the sensor information and the original map, and issuing a target positioning event according to the positioning result, comprises the following steps:
acquiring sensor information and mileage information, positioning the robot according to the sensor information and an original map, and determining the current pose of the robot;
acquiring a pose of the robot after the last time optimization and a pose of the robot after the current time optimization obtained by optimizing the current pose;
judging whether the particle convergence value corresponding to the current pose is smaller than a convergence threshold value or not according to the sensor information, the mileage information, the optimized pose at the last moment, the current pose and an original map, and judging the pose distance between the optimized pose at the last moment and the optimized pose at the current moment if the particle convergence value corresponding to the current pose is not smaller than the convergence threshold value;
and if the pose distance is smaller than a distance threshold, evaluating a matching value between a local map obtained from the current observation and the original map using a laser matching method, and if the matching value is smaller than a preset threshold, issuing a target positioning event, wherein the target positioning event is a low positioning event, and the low positioning event indicates that the difference in the environment currently observed by the robot exceeds a threshold or that the robot particles have not converged.
2. The method according to claim 1, characterized in that it comprises:
and judging whether the particle convergence value corresponding to the current pose is smaller than a convergence threshold value or not according to the sensor information, the mileage information, the optimized pose at the last moment, the current pose and the original map, and if so, generating a difference map of the current environment according to the sensor information.
3. The method according to claim 1, characterized in that it comprises:
and if the pose distance is greater than the distance threshold, re-acquiring the sensor information and the mileage information, and judging whether the particle convergence value corresponding to the current pose is smaller than the convergence threshold according to the re-acquired information.
4. The method of claim 1, wherein calculating map discrepancy information based on the target positioning event comprises:
dividing the original map into a plurality of map tiles;
determining a map block where the target positioning event is located and adjacent map blocks to determine a differential map block group;
calculating the difference between each map block in the differential map block group and the map of the area corresponding to the original map to obtain a first difference value;
and if the first difference value is larger than a first threshold value, issuing the map difference information of the map block corresponding to the first difference value.
5. The method of claim 1, wherein issuing map update information according to the stored map information and the map difference information comprises:
based on the received map difference information, requesting the system built-in map information, acquiring the stored map information, and calculating the difference between the stored map information and the map difference information to obtain a second difference value;
and if the second difference value is larger than a second threshold value, issuing map update information.
6. An apparatus for robotic map differential update, the apparatus comprising: a positioning module, a map difference calculating module, a map difference analyzing module and a map updating module,
the positioning module is used for positioning the robot according to the sensor information and the original map and issuing a target positioning event to the map difference calculation module according to a positioning result;
the map difference calculation module is used for receiving the target positioning event, calculating map difference information and issuing the map difference information;
the map difference analysis module is used for issuing map updating information according to the received map difference information and the stored map information;
the map updating module is used for rendering a map according to the map updating information and issuing a new map notification;
wherein the positioning module is configured to:
acquiring sensor information and mileage information, positioning the robot according to the sensor information and an original map, and determining the current pose of the robot;
acquiring a pose of the robot after the last time optimization and a pose of the robot after the current time optimization obtained by optimizing the current pose;
judging whether the particle convergence value corresponding to the current pose is smaller than a convergence threshold value or not according to the sensor information, the mileage information, the optimized pose at the last moment, the current pose and an original map, and judging the pose distance between the optimized pose at the last moment and the optimized pose at the current moment if the particle convergence value corresponding to the current pose is not smaller than the convergence threshold value;
and if the pose distance is smaller than a distance threshold, evaluating a matching value between a local map obtained from the current observation and the original map using a laser matching method, and if the matching value is smaller than a preset threshold, issuing a target positioning event, wherein the target positioning event is a low positioning event, and the low positioning event indicates that the difference in the environment currently observed by the robot exceeds a threshold or that the robot particles have not converged.
7. A computer readable medium having computer readable instructions stored thereon which are executable by a processor to implement the method of any one of claims 1 to 5.
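The map tiling recited in claim 4 — dividing the original map into blocks and selecting the block containing the positioning event together with its adjacent blocks as the differential map block group — can be sketched as below. The tile size, grid representation, and eight-neighbour choice are illustrative assumptions, not taken from the patent:

```python
def split_into_tiles(grid, tile):
    """Divide an occupancy grid (a list of equal-length rows) into
    tile x tile map blocks, keyed by block coordinates."""
    h, w = len(grid), len(grid[0])
    tiles = {}
    for ty in range(0, h, tile):
        for tx in range(0, w, tile):
            tiles[(ty // tile, tx // tile)] = [
                row[tx:tx + tile] for row in grid[ty:ty + tile]
            ]
    return tiles

def differential_group(tiles, event_tile):
    """Return the block containing the positioning event plus its
    (up to eight) existing neighbours: the differential map block group."""
    ty, tx = event_tile
    return {
        (ty + dy, tx + dx): tiles[(ty + dy, tx + dx)]
        for dy in (-1, 0, 1)
        for dx in (-1, 0, 1)
        if (ty + dy, tx + dx) in tiles
    }
```

Each block in the resulting group would then be compared against the corresponding area of the original map to obtain the first difference value.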
CN202011035189.XA 2020-09-27 2020-09-27 Robot map difference updating method and device Active CN111928866B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011035189.XA CN111928866B (en) 2020-09-27 2020-09-27 Robot map difference updating method and device

Publications (2)

Publication Number Publication Date
CN111928866A CN111928866A (en) 2020-11-13
CN111928866B (en) 2021-02-12

Family

ID=73334284

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011035189.XA Active CN111928866B (en) 2020-09-27 2020-09-27 Robot map difference updating method and device

Country Status (1)

Country Link
CN (1) CN111928866B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112859874B (en) * 2021-01-25 2024-04-30 上海思岚科技有限公司 Dynamic environment area operation and maintenance method and equipment for mobile robot
CN113783945A (en) * 2021-08-25 2021-12-10 深圳拓邦股份有限公司 Map synchronization method and device for mobile robot and mobile robot
CN113932790A (en) * 2021-09-01 2022-01-14 北京迈格威科技有限公司 Map updating method, device, system, electronic equipment and storage medium
CN115512065B (en) * 2022-11-17 2023-05-05 之江实验室 Real-time map construction method and device based on blocking large-scale scene

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010134835A (en) * 2008-12-08 2010-06-17 Renesas Electronics Corp Map drawing device and display control method of map data
CN102538779A (en) * 2010-10-25 2012-07-04 株式会社日立制作所 Robot system and map updating method
CN104699099A (en) * 2009-08-31 2015-06-10 Neato机器人技术公司 Method and apparatus for simultaneous localization and mapping of mobile robot environment
CN110763239A (en) * 2019-11-14 2020-02-07 华南智能机器人创新研究院 Filtering combined laser SLAM mapping method and device
CN110763245A (en) * 2019-10-25 2020-02-07 江苏海事职业技术学院 Map creating method and system based on stream computing

Similar Documents

Publication Publication Date Title
CN111928866B (en) Robot map difference updating method and device
CN107728615B (en) self-adaptive region division method and system
US8971641B2 (en) Spatial image index and associated updating functionality
WO2019062651A1 (en) Localization and mapping method and system
CN109190573B (en) Ground detection method applied to unmanned vehicle, electronic equipment and vehicle
US11475591B2 (en) Hybrid metric-topological camera-based localization
US11534917B2 (en) Methods, systems, articles of manufacture and apparatus to improve resource utilization for binary tree structures
CN107528904B (en) Method and apparatus for data distributed anomaly detection
US11836861B2 (en) Correcting or expanding an existing high-definition map
CN112198878B (en) Instant map construction method and device, robot and storage medium
CN111061740A (en) Data synchronization method, equipment and storage medium
CN115326051A (en) Positioning method and device based on dynamic scene, robot and medium
CN116453371B (en) Method and device for identifying returning of shared vehicle, computer equipment and storage medium
CN108573510B (en) Grid map vectorization method and device
CN113759348B (en) Radar calibration method, device, equipment and storage medium
CN116107576A (en) Page component rendering method and device, electronic equipment and vehicle
US11353579B2 (en) Method for indicating obstacle by smart roadside unit
CN113902874A (en) Point cloud data processing method and device, computer equipment and storage medium
CN111310824A (en) Multi-angle dense target detection inhibition optimization method and equipment
CN112859874B (en) Dynamic environment area operation and maintenance method and equipment for mobile robot
Chu et al. Convergent application for trace elimination of dynamic objects from accumulated lidar point clouds
CN116579960B (en) Geospatial data fusion method
CN117235089B (en) Map checking method, map checking device, electronic equipment and readable storage medium
US20240119615A1 (en) Tracking three-dimensional geometric shapes
CN111506695B (en) Coordinate direction identification method and system during GPX data processing into surface data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231106

Address after: Room 2007-25, South Building, Yancheng International Venture Capital Center, No. 5, Renmin South Road, Yannan High tech Zone, Yancheng City, Jiangsu Province, 224008 (CND)

Patentee after: Silan Robot (Yancheng) Co.,Ltd.

Address before: 201210 unit 01 and 02, 5 / F, building 4, No. 666 shengxia road and No. 122 Yindong Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai

Patentee before: SHANGHAI SLAMTEC Co.,Ltd.