WO2022142992A1 - Fusion positioning method, apparatus, device, and computer-readable storage medium - Google Patents

Fusion positioning method, apparatus, device, and computer-readable storage medium Download PDF

Info

Publication number
WO2022142992A1
WO2022142992A1 PCT/CN2021/135015 CN2021135015W WO2022142992A1 WO 2022142992 A1 WO2022142992 A1 WO 2022142992A1 CN 2021135015 W CN2021135015 W CN 2021135015W WO 2022142992 A1 WO2022142992 A1 WO 2022142992A1
Authority
WO
WIPO (PCT)
Prior art keywords
pose
point cloud
robot
road sign
cloud data
Prior art date
Application number
PCT/CN2021/135015
Other languages
English (en)
French (fr)
Inventor
周阳
张涛
陈美文
刘运航
何科君
Original Assignee
深圳市普渡科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市普渡科技有限公司 filed Critical 深圳市普渡科技有限公司
Priority to JP2023521741A priority Critical patent/JP2023545279A/ja
Priority to EP21913733.8A priority patent/EP4206736A1/en
Priority to US18/248,780 priority patent/US20230400585A1/en
Publication of WO2022142992A1 publication Critical patent/WO2022142992A1/zh

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20Instruments for performing navigational calculations
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • G01S17/42Simultaneous measurement of distance and other co-ordinates
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4808Evaluating distance, position or velocity data
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0234Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using optical markers or beacons
    • G05D1/0236Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using optical markers or beacons in combination with a laser
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0268Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/0274Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Definitions

  • the present application relates to the field of robotics, and in particular, to a fusion positioning method, apparatus, device, and computer-readable storage medium.
  • the 2D lidar positioning method mainly matches lidar point cloud data against a pre-built 2D grid map to estimate the robot pose.
  • however, this positioning method encounters problems in practice, because real application scenarios contain long corridors, open spaces, and other situations in which a certain degree of freedom exceeds the observation range of the two-dimensional lidar. In such cases the two-dimensional lidar cannot fully constrain the three degrees of freedom of the robot pose in two-dimensional space, which results in errors in the robot pose estimate.
  • a fusion positioning method, apparatus, device, and computer-readable storage medium are provided.
  • a fusion positioning method comprising:
  • sampling environmental objects with a two-dimensional lidar carried by a robot and obtaining radar point cloud related data of the environmental objects, the radar point cloud related data including the position P i of a road sign L i in the environmental objects on a road sign map and the de-distorted point cloud data of the environmental objects;
  • performing proximity matching between the position P i of the road sign L i on the road sign map and the road sign map, and obtaining the position P j of the road sign L j on the road sign map closest to the position P i; performing scan matching on a two-dimensional grid map using the de-distorted point cloud data of the environmental objects, and obtaining the global pose of the robot in the two-dimensional grid map as the global pose observation value G pose of the robot; and
  • iteratively optimizing the pose of the robot according to the position P i of the road sign L i on the road sign map, the position P j of the road sign L j on the road sign map, the estimated pose of the robot, and the global pose observation value G pose, until the error E 1 and the error E 2 are the smallest, where the error E 1 is the position difference between the road sign L i and the road sign L j on the road sign map, and the error E 2 is the difference between the estimated pose of the robot and the global pose observation value G pose.
  • a fusion positioning device comprising:
  • the first acquisition module is used to sample the environmental objects through the two-dimensional lidar carried by the robot and obtain the radar point cloud related data of the environmental objects, the radar point cloud related data including the position P i of the road sign L i in the environmental objects on the road sign map and the de-distorted point cloud data of the environmental objects;
  • a second acquisition module configured to perform proximity matching between the position P i of the road sign Li on the road sign map and the road sign map, and obtain the position P j of the road sign L j closest to the position P i on the road sign map;
  • the third acquisition module is configured to perform scan matching on a two-dimensional grid map using the de-distorted point cloud data of the environmental objects, and acquire the global pose of the robot in the two-dimensional grid map as the global pose observation value G pose of the robot;
  • an iterative optimization module used to iteratively optimize the pose of the robot according to the position P i of the road sign L i on the road sign map, the position P j of the road sign L j on the road sign map, the estimated pose of the robot, and the global pose observation value G pose, until the error E 1 and the error E 2 are the smallest, where the error E 1 is the position difference between the road sign L i and the road sign L j on the road sign map, and the error E 2 is the difference between the estimated pose of the robot and the global pose observation value G pose.
  • a device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the technical solution of the fusion positioning method described above.
  • a computer-readable storage medium stores a computer program that, when executed by a processor, implements the steps of the technical solution of the above-mentioned fusion positioning method.
  • FIG. 1 is a flowchart of a fusion positioning method provided by an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of a fusion positioning device provided by an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of a device provided by an embodiment of the present application.
  • the robot can be a robot operating in a restaurant, such as a food delivery robot; a medicine delivery robot operating in a medical facility such as a hospital; or a handling robot working in a warehouse or similar place, etc.
  • the fusion positioning method mainly includes steps S101 to S104, which are described in detail as follows:
  • Step S101: Sample the environmental objects with the two-dimensional lidar carried by the robot and obtain the radar point cloud related data of the environmental objects, wherein the radar point cloud related data includes the position P i of the road sign L i in the environmental objects on the road sign map and the de-distorted point cloud data of the environmental objects.
  • the environment refers to the environment in which the robot works
  • the environmental objects refer to all objects in the environment in which the robot works, including road signs and other objects set in the environment, such as a certain kind of goods, a tree, a wall or a table, etc.
  • the road sign can be a rod-shaped object such as a reflective column or a reflective strip, set in the working environment of the robot and within the visual range of the two-dimensional lidar, wherein the diameter of the reflective column can be about 5 centimeters and the width of the reflective strip can also be about 5 centimeters.
  • for more accurate auxiliary positioning, at least three such road signs (reflective columns, reflective strips, and the like) should be set up; they should preferably not lie on the same straight line, and the spacing between them should preferably be kept larger than 1 meter, etc.
  • the environment objects are sampled by the two-dimensional laser radar carried by the robot, and the acquisition of the radar point cloud related data of the environment objects can be achieved through steps S1011 to S1014, which are described as follows:
  • Step S1011 sampling the environmental objects through the two-dimensional laser radar to obtain original radar point cloud data of the environmental objects.
  • the sampling process of the two-dimensional lidar carried by the robot is the same as in the prior art: laser beams are emitted into the surrounding environment to scan the current environment in real time, and the time-of-flight ranging method is used to compute the distance between the robot and the road signs in the environment. Whenever a laser beam hits an environmental object, the angle of the beam relative to the robot and the distance between the laser source and the hit object are recorded; the information gathered from multiple laser beams hitting environmental objects constitutes the original radar point cloud data of the environmental objects;
  • the road sign Li sampled by the two-dimensional lidar is any road sign in the environment.
  • Step S1012 Based on the relative pose of each point in the original radar point cloud data, de-distort the original radar point cloud data of the environmental object to obtain the de-distorted point cloud data of the environmental object.
  • since the robot is moving while the two-dimensional lidar samples the environment, the origin of the lidar coordinate system differs from one sampled point to the next. For example, when the 2D lidar samples the second point, the robot has already undergone a displacement, so the origin of the lidar coordinate system in which the 2D lidar sits has also shifted. The same holds for every point from the second one up to the last point of a scan period: each sampled point has a different coordinate origin, which distorts the radar point cloud data, and this distortion should be eliminated.
  • specifically, based on the relative pose of each point in the original radar point cloud data, the original radar point cloud data of the environmental objects can be de-distorted to obtain the de-distorted point cloud data of the environmental objects as follows: determine the new origin of the lidar coordinate system in which the 2D lidar sits after one scan period, multiply the relative pose △T by the coordinates P of each point expressed in the lidar coordinate system of the old origin, and take the product △T*P as the coordinates of that point, where the relative pose △T is the robot pose data of each point in the original radar point cloud data relative to the new origin.
  • here, determining the new origin of the lidar coordinate system in which the 2D lidar sits after one scan period means that, when a scan cycle of the 2D lidar ends and the 2D lidar has sampled the last point of a frame of lidar point cloud data, the center point of the 2D lidar at that moment is taken as the new origin of the lidar coordinate system in which the 2D lidar sits.
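  • For illustration, the following is a minimal sketch of the de-distortion step described above, assuming poses are represented as (x, y, θ) and that the relative pose △T of each point with respect to the new origin is already available (for example, interpolated from odometry); the function names and array layout are assumptions for this sketch, not part of the original disclosure.
```python
import numpy as np

def pose_to_matrix(x, y, theta):
    """Homogeneous transform for a 2D pose (x, y, theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

def dedistort_scan(points, relative_poses):
    """De-distort one frame of 2D lidar points.

    points         : (N, 2) array, each point P in the lidar frame of the old origin.
    relative_poses : list of N poses (x, y, theta), the robot pose of each point
                     relative to the new origin (lidar center at the end of the scan).
    Returns the (N, 2) array of corrected coordinates, i.e. △T * P for each point.
    """
    corrected = np.empty_like(points, dtype=float)
    for k, (p, rel) in enumerate(zip(points, relative_poses)):
        delta_T = pose_to_matrix(*rel)                       # △T for this point
        corrected[k] = (delta_T @ np.array([p[0], p[1], 1.0]))[:2]
    return corrected
```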
  • Step S1013 Clustering the undistorted point cloud data of the landmark Li in the environment to obtain the point cloud cluster of the landmark Li.
  • the environmental objects include road signs such as reflective pillars and reflective strips set in the environment. Therefore, the undistorted point cloud data of the environmental objects obtained in step S1012 also includes the undistorted point cloud data of the road signs Li in the environment.
  • as for how to extract the de-distorted point cloud data of the road sign L i from the environmental objects, one method is as follows: after the original radar point cloud data of the environmental objects has been de-distorted to obtain the de-distorted point cloud data of the environmental objects, the laser reflection intensity of each point in the de-distorted point cloud data is compared with a preset intensity threshold; points whose laser reflection intensity is below the preset intensity threshold are removed, and the point cloud data whose laser reflection intensity is higher than the preset intensity threshold (that is, greater than or equal to the preset intensity threshold) is retained and used as the de-distorted point cloud data of the road sign L i in the environment.
  • for example, if the laser reflection intensity detectable by the 2D lidar is assumed to range from 0 to 1, and road signs such as reflective columns and reflective strips usually have a laser reflection intensity above 0.85, the preset intensity threshold can be set to 0.85; the point cloud data whose laser reflection intensity is higher than 0.85 in the de-distorted point cloud data of the environmental objects is then retained, and this point cloud data is the de-distorted point cloud data of the road sign L i in the environment.
  • considering that the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm can cluster dense data sets of arbitrary shape and detect outliers while clustering, with results that are rarely biased, the DBSCAN algorithm can be used in this embodiment to cluster the de-distorted point cloud data of the road sign L i in the environment and obtain the point cloud cluster of the road sign L i.
  • Step S1014: Obtain the geometric center coordinates of the point cloud cluster of the road sign L i in the robot body coordinate system, and use the geometric center coordinates of the point cloud cluster of the road sign L i in the robot body coordinate system as the position P i of the road sign L i on the road sign map.
  • after clustering, the point cloud cluster of the road sign L i can be regarded as a single mass point, and its geometric center coordinates in the robot body coordinate system can be obtained by geometric or physical methods; these geometric center coordinates of the point cloud cluster in the robot body coordinate system are taken as the position P i of the road sign L i on the road sign map. Since, in this embodiment, the diameter of the reflective column and the width of the reflective strip are relatively small, simply taking the geometric center of the point cloud cluster of the road sign L i approximately reflects the geometric center of road signs such as reflective columns and reflective strips.
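  • A compact sketch of steps S1013 and S1014 might look as follows; it assumes each de-distorted point carries a laser reflection intensity and uses scikit-learn's DBSCAN as one possible clustering implementation (the 0.85 threshold matches the example above, while the DBSCAN parameters eps and min_samples are illustrative assumptions).
```python
import numpy as np
from sklearn.cluster import DBSCAN

def landmark_position(points, intensities, intensity_threshold=0.85):
    """Extract the road sign position P_i in the robot body frame.

    points      : (N, 2) de-distorted point coordinates in the robot body frame.
    intensities : (N,) laser reflection intensity of each point, assumed in [0, 1].
    Returns the geometric center of the largest high-intensity cluster, or None.
    """
    # Keep only points whose reflection intensity reaches the preset threshold.
    keep = intensities >= intensity_threshold
    candidates = points[keep]
    if len(candidates) == 0:
        return None

    # Cluster the retained points; eps / min_samples are illustrative values.
    labels = DBSCAN(eps=0.1, min_samples=3).fit_predict(candidates)
    valid = labels[labels >= 0]
    if len(valid) == 0:
        return None

    # Take the largest cluster as the road sign L_i and use its geometric center as P_i.
    best = np.bincount(valid).argmax()
    return candidates[labels == best].mean(axis=0)
```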
  • Step S102: Perform proximity matching between the position P i of the road sign L i in the environment on the road sign map and the road sign map, and obtain the position P j of the road sign L j on the road sign map that is closest to the position P i.
  • the landmark map refers to a pre-made map that reflects the location of the landmark (Landmark) in the world coordinate system, and a point in the landmark map is regarded as a landmark.
  • the position Pi of the landmark Li on the landmark map is the geometric center coordinate of the point cloud cluster of the landmark Li in the robot body coordinate system.
  • a nearest neighbor algorithm such as the kNN (k-Nearest Neighbor) algorithm can be used to perform proximity matching between the position P i of the road sign L i on the road sign map and the road sign map, and to obtain the position P j of the road sign L j on the road sign map that is closest to the position P i.
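  • One possible way to implement this proximity matching is a KD-tree nearest-neighbour query over the road sign map; in the sketch below, scipy's cKDTree stands in for the kNN algorithm mentioned above, the map coordinates are example values, and the observed position P i is assumed to have already been transformed into the map frame using the current pose estimate.
```python
import numpy as np
from scipy.spatial import cKDTree

# landmark_map: (M, 2) array of road sign positions in the world frame,
# built once from the pre-made road sign map (example values below).
landmark_map = np.array([[1.2, 0.5], [3.8, 2.1], [5.0, -1.3]])
kdtree = cKDTree(landmark_map)

def match_landmark(P_i_world):
    """Return (P_j, index j) of the map road sign closest to the observed position P_i.

    P_i_world : observed landmark position P_i, expressed in the map (world) frame.
    """
    _, j = kdtree.query(P_i_world, k=1)
    return landmark_map[j], j
```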
  • Step S103: Perform scan matching on the two-dimensional grid map using the de-distorted point cloud data of the environmental objects, and obtain the global pose of the robot in the two-dimensional grid map as the global pose observation value G pose of the robot.
  • a two-dimensional grid map is also called a two-dimensional grid probability map or a two-dimensional occupancy grid map (Occupancy Grid Map). This kind of map divides the plane into grid cells and assigns each cell an occupancy rate (Occupancy). The occupancy rate refers to the probability, on the two-dimensional occupancy grid map, that a cell is occupied by an obstacle (occupied state), free of obstacles (free state), or somewhere between these two states: the obstacle-occupied state is represented by 1, the obstacle-free state is represented by 0, and a value between 0 and 1 represents the intermediate state. The occupancy rate thus indicates how likely a cell is to be occupied by an obstacle; the larger the occupancy rate of a cell, the more likely the cell is occupied by an obstacle, and vice versa.
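  • For concreteness, an occupancy grid map can be held as a 2D array of occupancy rates together with its resolution and origin; the sketch below (field names and layout are assumptions) shows the world-to-cell conversion that the scan matching steps below rely on.
```python
import numpy as np
from dataclasses import dataclass

@dataclass
class OccupancyGridMap:
    occupancy: np.ndarray   # (H, W) array of occupancy rates in [0, 1]
    resolution: float       # metres per cell
    origin: tuple           # (x0, y0), world coordinates of cell (0, 0)

    def cell_index(self, x, y):
        """Map a world point (x, y) to integer cell indices (row, col)."""
        col = int((x - self.origin[0]) / self.resolution)
        row = int((y - self.origin[1]) / self.resolution)
        return row, col

    def occupancy_at(self, x, y, default=0.0):
        """Occupancy rate of the cell containing (x, y); cells outside the map count as free."""
        row, col = self.cell_index(x, y)
        if 0 <= row < self.occupancy.shape[0] and 0 <= col < self.occupancy.shape[1]:
            return float(self.occupancy[row, col])
        return default
```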
  • as an embodiment of the present application, using the de-distorted point cloud data of the environmental objects to perform scan matching on the two-dimensional grid map and obtaining the global pose of the robot in the two-dimensional grid map as the global pose observation value G pose of the robot can be implemented through steps S1031 to S1033, described as follows:
  • Step S1031: Determine several candidate poses in the pose search space from the pose of the robot at the moment preceding the current moment.
  • the so-called robot pose refers to the state of the robot in the world coordinate system, represented by position and attitude; specifically, the abscissa x, the ordinate y, and the angle θ relative to the horizontal axis of the robot in the world coordinate system, i.e. (x, y, θ), represent the pose of the robot at a certain moment. Assuming the estimated pose of the robot at the moment preceding the current moment is (x, y, θ), each component of this pose is given an increment △x, △y and △θ, and several candidate poses (x 1, y 1, θ 1), (x 2, y 2, θ 2), ..., (x i, y i, θ i), ..., (x n, y n, θ n) then exist in the pose search space (△x, △y, △θ).
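  • Step S1031 can be sketched as enumerating a small grid of increments around the previous pose; the search radii and step sizes below are illustrative assumptions rather than values taken from the original text.
```python
import numpy as np

def candidate_poses(prev_pose, dx=0.2, dy=0.2, dtheta=np.deg2rad(5.0), steps=2):
    """Enumerate candidate poses (x_i, y_i, theta_i) in the search space (dx, dy, dtheta).

    prev_pose : (x, y, theta) of the robot at the moment before the current moment.
    steps     : number of increments on each side of the previous value per component.
    """
    x0, y0, t0 = prev_pose
    offsets = range(-steps, steps + 1)
    return [(x0 + i * dx, y0 + j * dy, t0 + k * dtheta)
            for i in offsets for j in offsets for k in offsets]
```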
  • Step S1032 Using each candidate pose among several candidate poses, project the dedistorted point cloud data of the environmental object to a two-dimensional grid map, and calculate the matching score of each candidate pose on the two-dimensional grid map.
  • here, projecting the de-distorted point cloud data of the environmental objects onto the two-dimensional grid map means transforming the de-distorted point cloud data of the environmental objects onto a certain occupied position, i.e. a grid cell, of the two-dimensional grid map according to a certain transformation relationship; when the candidate poses differ, the de-distorted point cloud data of the environmental objects is transformed onto different grid cells of the two-dimensional grid map.
  • assume a candidate pose (x i, y i, θ i) and a de-distorted point (x j, y j, θ j) of the environmental objects; corresponding to the candidate pose (x i, y i, θ i), the point (x j, y j, θ j) is projected onto a grid position of the two-dimensional grid map, and z k denotes the index of the grid cell at that projected position on the two-dimensional grid map.
  • as mentioned above, different grid cells are assigned different occupancy rates. Corresponding to a candidate pose (x i, y i, θ i), its matching score on the two-dimensional grid map is the sum of the occupancy rates of the grid cells onto which the de-distorted points of the environmental objects are projected; that is, if any de-distorted point (x j, y j, θ j) projects onto a grid cell whose occupancy rate is O j, the matching score of the candidate pose is Score i = O 1 + O 2 + ... + O m, where m is the number of de-distorted points in the de-distorted point cloud data of the environmental objects.
  • Step S1033 Determine the candidate pose with the highest matching score of several candidate poses on the two-dimensional grid map as the global pose of the robot in the two-dimensional grid map.
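  • Steps S1032 and S1033 can be sketched as follows, reusing the OccupancyGridMap helper above. The projection assumes the usual 2D rigid-body transform from the robot frame into the map frame for each candidate pose (the exact projection formula in the published application is given only as an image); the matching score of a candidate pose is the sum of the occupancy rates O j of the cells hit by the m de-distorted points, and the candidate with the highest score is returned as the global pose observation value G pose.
```python
import numpy as np

def match_score(grid_map, points, pose):
    """Sum of occupancy rates of the cells that the de-distorted points project onto."""
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    score = 0.0
    for px, py in points:                       # points in the robot frame
        wx = x + c * px - s * py                # assumed 2D rigid-body transform
        wy = y + s * px + c * py
        score += grid_map.occupancy_at(wx, wy)  # O_j of the hit cell
    return score

def global_pose_observation(grid_map, points, candidates):
    """Step S1033: candidate pose with the highest matching score, used as G_pose."""
    scores = [match_score(grid_map, points, pose) for pose in candidates]
    return candidates[int(np.argmax(scores))]
```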
  • Step S104: Iteratively optimize the pose of the robot according to the position P i of the road sign L i on the road sign map, the position P j of the road sign L j on the road sign map, the estimated pose of the robot, and the global pose observation value G pose, until the error E 1 and the error E 2 are the smallest, where the error E 1 is the position difference between the road sign L i and the road sign L j on the road sign map, and the error E 2 is the difference between the estimated pose of the robot and the global pose observation value G pose.
  • as an embodiment of the present application, iteratively optimizing the pose of the robot according to the position P i of the road sign L i on the road sign map, the position P j of the road sign L j on the road sign map, the estimated pose of the robot, and the global pose observation value G pose, until the error E 1 and the error E 2 are the smallest, can be done as follows: construct a nonlinear least squares problem with the pose of the robot as the state, take the error E 1 as the error constraint condition of the nonlinear least squares problem, and iteratively optimize the pose of the robot until the error E 1 and the error E 2 are the smallest.
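  • One way to realize this embodiment is to stack the two error terms as residuals of a nonlinear least-squares problem over the pose (x, y, θ) and hand it to an off-the-shelf solver; the sketch below uses scipy.optimize.least_squares and assumes the landmark observation P i is expressed in the robot body frame, with unit weights on both errors (these details are assumptions, not taken from the original text).
```python
import numpy as np
from scipy.optimize import least_squares

def fuse_pose(initial_pose, P_i_body, P_j_world, G_pose):
    """Iteratively optimize the robot pose so that E_1 and E_2 are minimized.

    E_1: difference between the map road sign P_j and the observed road sign P_i
         transformed into the world frame by the current pose estimate.
    E_2: difference between the pose estimate and the global pose observation G_pose.
    """
    def residuals(pose):
        x, y, theta = pose
        c, s = np.cos(theta), np.sin(theta)
        # Landmark observation mapped into the world frame with the current estimate.
        obs_world = np.array([x + c * P_i_body[0] - s * P_i_body[1],
                              y + s * P_i_body[0] + c * P_i_body[1]])
        e1 = obs_world - np.asarray(P_j_world)          # error E_1 (2 residuals)
        e2 = np.asarray(pose) - np.asarray(G_pose)      # error E_2 (3 residuals)
        return np.concatenate([e1, e2])

    result = least_squares(residuals, np.asarray(initial_pose, dtype=float))
    return result.x   # optimized (x, y, theta)
```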
  • as another embodiment of the present application, iteratively optimizing the pose of the robot according to the position P i of the road sign L i on the road sign map, the position P j of the road sign L j on the road sign map, the estimated pose of the robot, and the global pose observation value G pose, until the error E 1 and the error E 2 are the smallest, can also be done as follows: based on the position P i of the road sign L i on the road sign map, the position P j of the road sign L j on the road sign map, the estimated pose of the robot, and the global pose observation value G pose, construct a particle filter model or an extended Kalman filter model, and use the particle filter model or the extended Kalman filter model to continuously correct the estimated pose of the robot until the error E 1 and the error E 2 are the smallest; here the error E 1 and the error E 2 have the same meaning as the error E 1 and the error E 2 mentioned in the foregoing embodiment.
  • the Extended Kalman Filter (EKF) is an extension of the standard Kalman filter to the nonlinear case; it is an efficient recursive filter, i.e. an autoregressive filter, whose basic idea is to linearize the nonlinear system using a Taylor series expansion and then filter the data within the Kalman filter framework. The particle filter approximates the probability density function with a set of random samples propagated in the state space, and replaces the integral operation with the sample mean to obtain a minimum-variance estimate of the system state; these samples are vividly called "particles".
  • constructing a particle filter model based on the position P i of the road sign L i on the road sign map, the position P j of the road sign L j on the road sign map, the estimated pose of the robot, and the global pose observation value G pose, and using the particle filter model to continuously correct the estimated pose of the robot until the error E 1 and the error E 2 are the smallest, can be implemented as follows: obtain the current particle swarm based on the estimated pose of the robot and the pose change within a preset time period, where each particle in the particle swarm represents one pose of the robot and the particle swarm represents the probability distribution of the robot pose; assign a weight to each particle in the current particle swarm and obtain a first estimated pose of the robot according to the assigned weights, where the weight characterizes the probability that each particle is the current actual pose; based on the first estimated pose, the position P i of the road sign L i on the road sign map, the position P j of the road sign L j on the road sign map, and the global pose observation value G pose, perform scan matching with a preset scan matching model to obtain a second estimated pose; and continue to correct the second estimated pose, taking the estimated pose obtained when the error E 1 and the error E 2 are the smallest as the final pose of the robot.
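  • A highly simplified sketch of the particle-filter variant described above is given below; the motion noise, the weighting function (which would be built from the landmark error E 1 and the global pose error E 2), and the resampling scheme are illustrative assumptions rather than details from the published application.
```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, delta_pose, weight_fn, motion_noise=(0.02, 0.02, 0.01)):
    """One correction step of the particle filter.

    particles  : (N, 3) array of poses (x, y, theta); the swarm represents the
                 probability distribution of the robot pose.
    delta_pose : pose change over the preset time period (e.g. from odometry).
    weight_fn  : callable(pose) -> likelihood, e.g. built from the errors E_1 and E_2.
    Returns (resampled particles, weighted mean pose used as the corrected estimate).
    """
    # Propagate each particle with the pose change plus noise.
    particles = particles + np.asarray(delta_pose) + rng.normal(0.0, motion_noise, particles.shape)

    # Weight each particle by how well it explains the observations.
    weights = np.array([weight_fn(p) for p in particles])
    weights /= weights.sum()

    # Weighted mean as the current pose estimate; then resample.
    estimate = (weights[:, None] * particles).sum(axis=0)
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], estimate
```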
  • as can be seen from the fusion positioning method illustrated in FIG. 1, the technical solution of the present application, on the one hand, samples the environmental objects, matches the road sign L j closest to the position P i of the road sign L i in the environment, and obtains the position difference E 1 between the road sign L i and the road sign L j on the road sign map; on the other hand, it uses the de-distorted point cloud data of the environmental objects to perform scan matching on the two-dimensional grid map and thereby obtains the difference E 2 between the estimated pose of the robot and the global pose observation value G pose of the robot; finally, it iteratively optimizes the pose of the robot until the error E 1 and the error E 2 are the smallest. Compared with the prior art, in which only road signs are used to assist positioning, the technical solution of the present application fuses road sign positioning with two-dimensional grid map positioning, so that the positioning of the robot is more accurate.
  • FIG. 2 is a fusion positioning device provided by an embodiment of the present application, which may include a first acquisition module 201 , a second acquisition module 202 , a third acquisition module 203 , and an iterative optimization module 204 , which are described in detail as follows:
  • the first acquisition module 201 is used to sample the environmental objects through the two-dimensional lidar carried by the robot and obtain the radar point cloud related data of the environmental objects, wherein the radar point cloud related data includes the position P i of the road sign L i in the environmental objects on the road sign map and the de-distorted point cloud data of the environmental objects;
  • the second acquisition module 202 is configured to perform proximity matching between the position P i of the road sign Li on the road sign map in the environment and the road sign map, and obtain the position P j of the road sign L j that is closest to the position P i on the road sign map;
  • the third acquisition module 203 is used to perform scan matching on the two-dimensional grid map using the de-distorted point cloud data of the environmental objects, and acquire the global pose of the robot in the two-dimensional grid map as the global pose observation value G pose of the robot;
  • the iterative optimization module 204 is used to iteratively optimize the pose of the robot according to the position P i of the road sign L i on the road sign map, the position P j of the road sign L j on the road sign map, the estimated pose of the robot, and the global pose observation value G pose, until the error E 1 and the error E 2 are the smallest, where the error E 1 is the position difference between the road sign L i and the road sign L j on the road sign map, and the error E 2 is the difference between the estimated pose of the robot and the global pose observation value G pose.
  • the first obtaining module 201 in the example of FIG. 2 may include a sampling unit, a correction unit, a clustering unit and a center obtaining unit, wherein:
  • the sampling unit is used to sample the environmental objects through the two-dimensional lidar to obtain the original radar point cloud data of the environmental objects;
  • the correction unit is used to de-distort the original radar point cloud data of the environmental objects based on the relative pose of each point in the original radar point cloud data, and obtain the de-distorted point cloud data of the environmental objects;
  • the clustering unit is used for clustering the undistorted point cloud data of the landmark Li in the environment to obtain the point cloud cluster of the landmark Li;
  • the center obtaining unit is used to obtain the geometric center coordinates of the point cloud cluster of the road sign L i in the robot body coordinate system, and use the geometric center coordinates of the point cloud cluster of the road sign L i in the robot body coordinate system as the position P i of the road sign L i on the road sign map.
  • the above-mentioned correction unit may include a new origin determination unit and a multiplication unit, wherein:
  • the new origin determination unit is used to determine the new origin of the lidar coordinate system where the two-dimensional lidar is located after a scanning period;
  • the multiplication unit is used to multiply the relative pose △T by the coordinates P of each point in the original radar point cloud data expressed in the lidar coordinate system of the old origin, and use the product △T*P as the coordinates of each point in the original radar point cloud data, where the relative pose △T is the robot pose data of each point in the original radar point cloud data relative to the new origin.
  • the apparatus of the example of FIG. 2 can also include a comparison module and a screening module, wherein:
  • the comparison module is used to compare the laser reflection intensity corresponding to each point in the dedistorted point cloud data of the environmental object with the preset light intensity threshold;
  • the screening module is used to retain the point cloud data whose laser reflection intensity is higher than the preset light intensity threshold in the undistorted point cloud data of the environmental objects, as the undistorted point cloud data of the road signs Li in the environment.
  • the third acquisition module 203 in the example of FIG. 2 may include a search unit, a matching unit and a global pose determination unit, wherein:
  • the search unit is used to determine several candidate poses in the pose search space from the poses of the robot at a moment before the current moment;
  • the matching unit is used to project the dedistorted point cloud data of the environmental objects to the two-dimensional grid map by using each of the several candidate poses, and calculate the matching of each candidate pose on the two-dimensional grid map Score;
  • the global pose determination unit is used for determining the candidate pose with the highest matching score of several candidate poses on the two-dimensional grid map as the global pose of the robot in the two-dimensional grid map.
  • the iterative optimization module 204 illustrated in FIG. 2 may include a problem building unit and a first correcting unit, wherein:
  • the construction unit is used to construct the nonlinear least squares problem with the pose of the robot as the state;
  • the first correction unit is used to iteratively optimize the pose of the robot by taking the error E 1 as the error constraint condition of the nonlinear least squares problem until the error E 1 and the error E 2 are the smallest.
  • the iterative optimization module 204 of the example of FIG. 2 may include a model building unit and a second correction unit, wherein:
  • the model building unit is used to construct a particle filter model based on the position P i of the road sign Li on the road sign map, the position P j of the road sign L j on the road sign map, the estimated pose of the robot and the global pose observation value G pose , or Extended Kalman filter model;
  • the second correction unit is used for using the particle filter model or the extended Kalman filter model to continuously correct the estimated pose of the robot until the error E 1 and the error E 2 are the smallest.
  • as can be seen from the above description of the technical solution, the technical solution of the present application, on the one hand, samples the environmental objects, matches the road sign L j closest to the position P i of the road sign L i in the environment, and obtains the position difference E 1 between the road sign L i and the road sign L j on the road sign map; on the other hand, it uses the de-distorted point cloud data of the environmental objects to perform scan matching on the two-dimensional grid map and thereby obtains the difference E 2 between the estimated pose of the robot and the global pose observation value G pose of the robot; finally, it iteratively optimizes the pose of the robot until the error E 1 and the error E 2 are the smallest. Compared with the prior art, in which only road signs are used to assist positioning, the technical solution of the present application fuses road sign positioning with two-dimensional grid map positioning, so that the positioning of the robot is more accurate.
  • FIG. 3 is a schematic structural diagram of a device provided by an embodiment of the present application.
  • the device 3 in this embodiment mainly includes: a processor 30 , a memory 31 , and a computer program 32 stored in the memory 31 and executable on the processor 30 , such as a program for a fusion positioning method.
  • when the processor 30 executes the computer program 32, the steps of the above-mentioned embodiment of the fusion positioning method are implemented, for example steps S101 to S104 shown in FIG. 1.
  • alternatively, when the processor 30 executes the computer program 32, the functions of each module/unit in the above-mentioned device embodiments are implemented, for example the functions of the first acquisition module 201, the second acquisition module 202, the third acquisition module 203, and the iterative optimization module 204 shown in FIG. 2.
  • exemplarily, the computer program 32 of the fusion positioning method mainly includes: sampling environmental objects with the two-dimensional lidar carried by the robot and obtaining radar point cloud related data of the environmental objects, wherein the radar point cloud related data includes the position P i of the road sign L i in the environment on the road sign map and the de-distorted point cloud data of the environmental objects; performing proximity matching between the position P i of the road sign L i on the road sign map and the road sign map, and obtaining the position P j of the road sign L j on the road sign map that is closest to the position P i; performing scan matching on the two-dimensional grid map using the de-distorted point cloud data of the environmental objects, and obtaining the global pose of the robot in the two-dimensional grid map as the global pose observation value G pose of the robot; and iteratively optimizing the pose of the robot according to the position P i of the road sign L i on the road sign map, the position P j of the road sign L j on the road sign map, the estimated pose of the robot, and the global pose observation value G pose, until the error E 1 and the error E 2 are the smallest, where the error E 1 is the position difference between the road sign L i and the road sign L j on the road sign map, and the error E 2 is the difference between the estimated pose of the robot and the global pose observation value G pose.
  • the computer program 32 may be divided into one or more modules/units, which are stored in the memory 31 and executed by the processor 30 to complete the present application.
  • One or more modules/units may be a series of computer program instruction segments capable of performing specific functions, the instruction segments being used to describe the execution process of the computer program 32 in the device 3 .
  • the computer program 32 can be divided into the functions of the first acquisition module 201, the second acquisition module 202, the third acquisition module 203 and the iterative optimization module 204 (modules in the virtual device), and the specific functions of each module are as follows:
  • the first acquisition module 201 is used to sample environmental objects with the two-dimensional lidar carried by the robot and obtain radar point cloud related data of the environmental objects, wherein the radar point cloud related data includes the position P i of the road sign L i in the environment on the road sign map and the de-distorted point cloud data of the environmental objects; the second acquisition module 202 is used to perform proximity matching between the position P i of the road sign L i in the environment on the road sign map and the road sign map, and obtain the position P j of the road sign L j on the road sign map that is closest to the position P i; the third acquisition module 203 is used to perform scan matching on the two-dimensional grid map using the de-distorted point cloud data of the environmental objects, and obtain the global pose of the robot in the two-dimensional grid map as the global pose observation value G pose of the robot; and the iterative optimization module 204 is used to iteratively optimize the pose of the robot according to the position P i of the road sign L i on the road sign map, the position P j of the road sign L j on the road sign map, the estimated pose of the robot, and the global pose observation value G pose, until the error E 1 and the error E 2 are the smallest, where the error E 1 is the position difference between the road sign L i and the road sign L j on the road sign map, and the error E 2 is the difference between the estimated pose of the robot and the global pose observation value G pose.
  • the device 3 may include, but is not limited to, the processor 30 and the memory 31 .
  • FIG. 3 is only an example of the device 3 and does not constitute a limitation of the device 3; the device may include more or fewer components than shown, combine certain components, or use different components. For example, the computing device may also include input and output devices, network access devices, buses, and the like.
  • the so-called processor 30 may be a central processing unit (Central Processing Unit, CPU), or other general-purpose processors, digital signal processors (Digital Signal Processors, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), Off-the-shelf programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the memory 31 may be an internal storage unit of the device 3 , such as a hard disk or a memory of the device 3 .
  • the memory 31 can also be an external storage device of the device 3, such as a plug-in hard disk equipped on the device 3, a smart memory card (Smart Media Card, SMC), a Secure Digital (SD) card, a flash memory card (Flash Card), and the like.
  • the memory 31 may also include both an internal storage unit of the device 3 and an external storage device.
  • the memory 31 is used to store computer programs and other programs and data required by the device.
  • the memory 31 can also be used to temporarily store data that has been output or is to be output.

Abstract

A fusion positioning method, including: sampling environmental objects with a two-dimensional lidar carried by a robot to obtain radar point cloud related data of the environmental objects (S101); performing proximity matching between the position P i of a road sign L i in the environmental objects on a road sign map and the road sign map to obtain the position P j of the road sign L j on the road sign map closest to the position P i (S102); performing scan matching on a two-dimensional grid map using the de-distorted point cloud data of the environmental objects to obtain the global pose of the robot in the two-dimensional grid map as the global pose observation value G pose of the robot (S103); and iteratively optimizing the pose of the robot according to the position P i of the road sign L i on the road sign map, the position P j of the road sign L j on the road sign map, the estimated pose of the robot, and the global pose observation value G pose, until the error E 1 and the error E 2 are the smallest (S104).

Description

融合定位方法、装置、设备和计算机可读存储介质
本申请要求于2020年12月29日提交至中国国家知识产权局专利局、申请号为2020115861333、申请名称为“一种融合定位方法、装置、设备和计算机可读存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及机器人领域,特别涉及一种融合定位方法、装置、设备和计算机可读存储介质。
背景技术
移动机器人在各种场合的应用,精确定位是机器人能够顺利完成任务的关键。一般而言,机器人的定位是基于三维或二维激光雷达实现,其中,二维激光雷达的定位方法主要是使用激光雷达点云数据与预先构建的二维栅格地图进行匹配来实现机器人位姿的估计。然而,这种定位方式也遭遇一些问题,原因在于实际应用场景中存在一些长走廊、空旷等某个自由度超出二维激光雷达观测范围的情形。这种情形下,二维激光雷达无法完全约束机器人在二维空间位姿的三个自由度,从而导致机器人位姿估计错误。
针对机器人在长走廊、空旷环境下机器人定位时遇到的上述问题,一种方案是利用人工制作高反光的路标(Landmark)进行辅助定位,例如,使用反光 板或反光柱进行辅助定位。然而,仅仅使用反光柱或反光板等高反光路标进行辅助定位,仍然无法适配对环境改造要求较高的环境。
申请内容
根据本申请的各种实施例,提供一种融合定位方法、装置、设备和计算机可读存储介质。
一种融合定位方法,包括:
通过机器人搭载的二维激光雷达对环境物进行采样,获取所述环境物的雷达点云相关数据,所述雷达点云相关数据包括所述路标Li在路标地图上的位置P i和所述环境物的去畸变点云数据;
将所述路标L i在路标地图上的位置P i与路标地图进行邻近匹配,获取所述路标地图上距离所述位置P i最近的路标L j的位置P j
使用所述环境物的去畸变点云数据在二维栅格地图进行扫面匹配,获取所述机器人在所述二维栅格地图中的全局位姿作为所述机器人的全局位姿观测值G pose
根据所述路标L i在路标地图上的位置P i、所述路标L j在所述路标地图上的位置P j、所述机器人的估计位姿和所述全局位姿观测值G pose,对所述机器人的位姿进行迭代优化,直至误差E 1和误差E 2最小时为止,所述误差E 1为述路标L i与所述路标L j在所述路标地图上的位置差值,所述误差E 2为所述机器人的估计位姿与所述全局位姿观测值G pose之间的差值。
一种融合定位装置,包括:
第一获取模块,用于通过机器人搭载的二维激光雷达对环境物进行采样,获取所述环境物的雷达点云相关数据,所述雷达点云相关数据包括所述环境物中路标L i在路标地图上的位置P i和所述环境物的去畸变点云数据;
第二获取模块,用于将所述路标L i在路标地图上的位置P i与路标地图进行邻近匹配,获取所述路标地图上距离所述位置P i最近的路标L j的位置P j
第三获取模块,用于使用所述环境物的去畸变点云数据在二维栅格地图进行扫面匹配,获取所述机器人在所述二维栅格地图中的全局位姿作为所述机器人的全局位姿观测值G pose
迭代优化模块,用于根据所述路标L i在路标地图上的位置P i、所述路标L j在所述路标地图上的位置P j、所述机器人的估计位姿和所述全局位姿观测值G pose,对所述机器人的位姿进行迭代优化,直至误差E 1和误差E 2最小时为止,所述误差E 1为述路标L i与所述路标L j在所述路标地图上的位置差值,所述误差E 2为所述机器人的估计位姿与所述全局位姿观测值G pose之间的差值。
一种设备,所述设备包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机程序,所述处理器执行所述计算机程序时实现如上述融合定位方法的技术方案的步骤。
一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时实现如上述融合定位方法的技术方案的步骤。
本申请的一个或多个实施例的细节在下面的附图和描述中提出。本申请的其它特征和优点将从说明书、附图以及权利要求书变得明显。
附图说明
为了更清楚地说明本申请实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他实施例的附图。
图1是本申请实施例提供的融合定位方法的流程图;
图2是本申请实施例提供的融合定位装置的结构示意图;
图3是本申请实施例提供的设备的结构示意图。
具体实施方式
为了便于理解本申请,下面将参照相关附图对本申请进行更全面的描述。附图中给出了本申请的较佳实施例。但是,本申请可以以许多不同的形式来实现,并不限于本文所描述的实施例。相反地,提供这些实施例的目的是使对本申请的公开内容的理解更加透彻全面。
除非另有定义,本文所使用的所有的技术和科学术语与属于发明的技术领域的技术人员通常理解的含义相同。本文中在发明的说明书中所使用的术语只是为了描述具体的实施例的目的,不是旨在限制本申请。本文所使用的术语“和/或”包括一个或多个相关的所列项目的任意的和所有的组合。
本申请提出了一种融合定位方法,可应用于机器人,该机器人可以是在餐厅作业的机器人,例如,传菜机器人,也可以是在医疗场所,例如医院作业的送药机器人,还可以是在仓库等场所作业的搬运机器人,等等。如附图1所示,融合定位方法主要包括步骤S101至S104,详述如下:
步骤S101:通过机器人搭载的二维激光雷达对环境物进行采样,获取环境物的雷达点云相关数据,其中,雷达点云相关数据包括环境物中路标L i在路标地图上的位置P i和环境物的去畸变点云数据。
在本申请实施例中,环境是指机器人工作的环境,环境物是指机器人工作的环境中的一切物体,包括在环境中设置的路标以及其他物体,例如,某种货 物、一棵树、一堵墙或一张桌子,等等,而路标可以是在机器人工作环境中、二维激光雷达的可视范围内设置的反光柱、反光条等杆状物,其中,反光柱的直径可以为5厘米左右,反光条的宽度也可以为5厘米左右。为了发挥更准确的辅助定位作用,这些反光柱、反光条等路标至少需设置3个以上,并且尽量不要设置在同一直线上,间距也尽量保持大于1米,等等。
作为本申请一个实施例,通过机器人搭载的二维激光雷达对环境物进行采样,获取环境物的雷达点云相关数据可通过步骤S1011至步骤S1014实现,说明如下:
步骤S1011:通过二维激光雷达对环境物进行采样,得到环境物的原始雷达点云数据。
在本申请实施例中,机器人搭载的二维激光雷达的采样过程与现有技术一样,都是通过向周遭环境发射激光束,实时对当前环境进行扫描,采用飞行时间测距法计算得到机器人与环境中路标之间的距离,每一束激光击中环境物时,该激光束相对于机器人的角度以及激光源与被击中环境物之间的距离等信息,多个激光束击中环境物时的上述信息就构成环境物的原始雷达点云数据;二维激光雷达所采样的路标L i是环境中的任意一个路标。
步骤S1012:基于原始雷达点云数据中每个点的相对位姿,对环境物的原始雷达点云数据进行去畸变,得到环境物的去畸变点云数据。
由于二维激光雷达在对环境采样时,机器人在移动,因此,每次采样的点 其在激光雷达坐标系下坐标的原点不一样。例如,二维激光雷达采样第二个点时,由于此时机器人产生了一个位移,二维激光雷达所在激光雷达坐标系的原点也产生了一个位移,同样地,第二个点至一个扫描周期结束时的最后一个点,每次采样的点,激光雷达坐标系的原点都不一样,这就使得雷达点云数据产生了畸变,这种畸变应当消除。具体而言,基于原始雷达点云数据中每个点的相对位姿,对环境物的原始雷达点云数据进行去畸变,得到环境物的去畸变点云数据可以是:确定一个扫描周期后,二维激光雷达所在激光雷达坐标系的新原点,将相对位姿△T与原始雷达点云数据中每个点在旧原点的激光雷达坐标系下的坐标P相乘,相乘后的结果△T*P作为原始雷达点云数据中每个点的坐标,其中,相对位姿△T为原始雷达点云数据中每个点相对于新原点得到的机器人位姿数据,其中,上述确定一个扫描周期后,二维激光雷达所在激光雷达坐标系的新原点,是指二维激光雷达一个扫描周期结束、二维激光雷达采样到一帧激光雷达点云数据最后一个点时,确定将此时二维激光雷达的中心点作为二维激光雷达所在激光雷达坐标系的新原点。
步骤S1013:对环境物中路标L i的去畸变点云数据进行聚类,得到路标L i的点云簇。
如前所述,环境物包括环境中设置的反光柱、反光条等路标,因此,经步骤S1012得到环境物的去畸变点云数据也包括环境物中路标L i的去畸变点云数据。至于如何提取环境物中路标L i的去畸变点云数据,一种方法是在对环境物 的原始雷达点云数据进行去畸变,得到环境物的去畸变点云数据之后,将环境物的去畸变点云数据中各个点对应的激光反光强度与预设光强阈值相比,去除其中激光反光强度小于预设光强阈值的点,保留环境物的去畸变点云数据中激光反光强度高于预设光强阈值(即大于或等于预设光强阈值)的点云数据,将这些激光反光强度高于预设光强阈值的点云数据作为环境物中路标L i的去畸变点云数据。例如,若将二维激光雷达能够检测到的激光反光强度假设为0至1,由于反光柱、反光条等路标的激光反光强度一般在0.85以上,则可将预设光强阈值设置为0.85,然后,保留环境物的去畸变点云数据中激光反光强度高于0.85的点云数据,这些激光反光强度高于0.85的点云数据就是环境物中路标L i的去畸变点云数据。
考虑到具有噪声的基于密度的聚类(Density-Based Spatial Clustering of Applications with Noise,DBSCAN)算法具有能够对任意形状的稠密数据集进行聚类,并且在聚类的同时发现异常点,聚类结果鲜有发生偏倚等优点,在本申请实施例中,可以采用DBSCAN算法对环境物中路标L i的去畸变点云数据进行聚类,得到路标L i的点云簇。
步骤S1014:求取路标L i的点云簇在机器人本体坐标系下的几何中心坐标,将路标L i的点云簇在机器人本体坐标系下的几何中心坐标作为路标L i在路标地图上的位置P i
在经过聚类,得到路标L i的点云簇之后,可以将路标L i的点云簇视为一个 质点,可以用几何或者物理方法求取路标L i的点云簇在机器人本体坐标系下的几何中心坐标,将路标L i的点云簇在机器人本体坐标系下的几何中心坐标作为路标L i在路标地图上的位置P i。由于本申请实施例中,反光柱直径和反光条宽度都比较小,通过简单地求取路标L i的点云簇几何中心,基本能够近似地反映反光柱、反光条等路标的几何中心。
步骤S102:将环境物中路标L i在路标地图上的位置P i与路标地图进行邻近匹配,获取路标地图上距离位置P i最近的路标L j的位置P j
在本申请实施例中,路标地图是指预先制作的反映路标(Landmark)在世界坐标系中位置的地图,路标地图中的点即视为一个路标。如前所述,路标L i在路标地图上的位置P i即路标L i的点云簇在机器人本体坐标系下的几何中心坐标。具体地,可以采用最近邻算法,例如kNN(k-Nearest Neighbor,k最近邻)算法将路标L i在路标地图上的位置P i与路标地图进行邻近匹配,获取路标地图上距离位置P i最近的路标L j的位置P j
步骤S103:使用环境物的去畸变点云数据在二维栅格地图进行扫面匹配,获取机器人在二维栅格地图中的全局位姿作为机器人的全局位姿观测值G pose
在本申请实施例中,二维栅格地图也称为二维栅格概率地图或二维占据栅格地图(Occupancy Grid Map),这种地图是将平面划分为一个个栅格,每个栅格赋予一个占据率(Occupancy)。所谓占据率,是指二维占据栅格地图上,一个栅格被障碍物占据的状态(occupied状态)、无障碍物占据的状态(Free状态) 或者介于这两种状态之间的概率,其中,被障碍物占据的状态使用1表示,无障碍物占据的状态使用0表示,介于两种状态之间则使用0~1之间的值来表示,显然,占据率表明了一个栅格被障碍物占据的可能性大小,当一个栅格的占据率越大,表明该栅格被障碍物占据的可能性越大,反之则反。
作为本申请一个实施例,使用环境物的去畸变点云数据在二维栅格地图进行扫面匹配,获取机器人在二维栅格地图中的全局位姿作为机器人的全局位姿观测值G pose可以通过步骤S1031至步骤S1033实现,说明如下:
步骤S1031:由机器人当前时刻前一时刻的位姿确定位姿搜索空间中的若干候选位姿。
所谓机器人的位姿,是指机器人在世界坐标系中的状态,该状态使用位置和姿态表示,具体是以机器人在世界坐标系中的横坐标x、纵坐标y和相对于横轴的夹角θ,即(x,y,θ)表示了机器人在某个时刻的位姿。假设通过估计,得到的机器人当前时刻前一时刻的位姿为(x,y,θ),那么,在该位姿上对其每个分量给予一个增量△x、△y和△θ,则在位姿搜索空间(△x,△y,△θ)中存在若干候选位姿(x 1,y 1,θ 1)、(x 2,y 2,θ 2)、…、(x i,y i,θ i)、…、(x n,y n,θ n)。
步骤S1032:利用若干候选位姿中的每一个候选位姿,将环境物的去畸变点云数据投影至二维栅格地图,计算每一个候选位姿在二维栅格地图上的匹配得分。
此处,将环境物的去畸变点云数据投影至二维栅格地图,是指按照某种变换关系,将环境物的去畸变点云数据变换至二维栅格地图上的某个占据位置或者栅格,当候选位姿不同时,环境物的去畸变点云数据会变换至二维栅格地图上不同的栅格。假设某个候选位姿(x i,y i,θ i),环境物的去畸变点云数据为(x j,y j,θ j),对应于候选位姿(x i,y i,θ i),该(x j,y j,θ j)投影至二维栅格地图上的栅格位置用
Figure PCTCN2021135015-appb-000001
表示,则:
Figure PCTCN2021135015-appb-000002
其中,zk是二维栅格地图上位置为
Figure PCTCN2021135015-appb-000003
的栅格的编号。
而如前所述,不同的栅格被赋予了不同占据率。对应于某个候选位姿(x i,y i,θ i),其在二维栅格地图上的匹配得分为环境物的去畸变点云数据中各个去畸变点云数据投影至二维栅格地图上不同栅格时,这些不同栅格的占据率之和,即,环境物的去畸变点云数据中任意一个去畸变点云数据(x j,y j,θ j)投影至二维栅格地图上位置为
Figure PCTCN2021135015-appb-000004
的栅格,其对应的占据率为O j,对应于候选位姿(x i,y i,θ i),该环境物的去畸变点云数据投影至二维栅格地图时在二维栅格地图上的匹配得分
Figure PCTCN2021135015-appb-000005
其中,m为该环境物的去畸变点云数据的去畸变点云数据个数。
步骤S1033:将若干候选位姿在二维栅格地图上的匹配得分最高的候选位姿确定为机器人在二维栅格地图中的全局位姿。
从步骤S1032分析可知,对于上述n个候选位姿,将得到n个不同的匹配 得分Score i,将n个候选位姿在二维栅格地图上的匹配得分最高的候选位姿确定为机器人在二维栅格地图中的全局位姿,即,选择Score k=max{Score i},将与Score k对应的候选位姿(x k,y k,θ k)确定为机器人在二维栅格地图中的全局位姿,该全局位姿作为机器人的全局位姿观测值G pose
步骤S104:根据路标L i在路标地图上的位置P i、路标L j在路标地图上的位置P j、机器人的估计位姿和全局位姿观测值G pose,对机器人的位姿进行迭代优化,直至误差E 1和误差E 2最小时为止,其中,误差E 1为路标L i与路标L j在路标地图上的位置差值,误差E 2为机器人的估计位姿与全局位姿观测值G pose之间的差值。
作为本申请一个实施例,根据路标L i在路标地图上的位置P i、路标L j在路标地图上的位置P j、机器人的估计位姿和全局位姿观测值G pose,对机器人的位姿进行迭代优化,直至误差E 1和误差E 2最小时为止可以是:以机器人的位姿作为状态构建非线性最小二乘化问题,以误差E 1为非线性最小二乘化问题的误差约束条件,迭代优化机器人的位姿,直至误差E 1和误差E 2最小时为止,此处,误差E 1为路标L i与路标L j在路标地图上的位置差值,误差E 2为机器人的估计位姿与全局位姿观测值G pose之间的差值。
作为本申请另一实施例,根据路标L i在路标地图上的位置P i、路标L j在路标地图上的位置P j、机器人的估计位姿和全局位姿观测值G pose,对机器人的位姿进行迭代优化,直至误差E 1和误差E 2最小时为止还可以是:基于路标L i在路 标地图上的位置P i、路标L j在路标地图上的位置P j、机器人的估计位姿和全局位姿观测值G pose,构建粒子滤波模型或扩展卡尔曼滤波模型,采用粒子滤波模型或扩展卡尔曼滤波模型,持续对机器人的估计位姿进行校正,直至误差E 1和误差E 2最小时为止,此处的误差E 1和误差E 2含义与前述实施例提及的误差E 1和误差E 2的含义相同。在本申请实施例中,扩展卡尔曼滤波(Extended Kalman Filter,EKF)是标准卡尔曼滤波在非线性情形下的一种扩展形式,它是一种高效率的递归滤波器即自回归滤波器,其基本思想是利用泰勒级数展开将非线性系统线性化,然后采用卡尔曼滤波框架对数据进行滤波,粒子滤波是通过寻找一组在状态空间中传播的随机样本来近似地表示概率密度函数,采用样本均值代替积分运算,进而获得系统状态的最小方差估计的过程,这些样本被形象地称为“粒子”。
上述基于路标L i在路标地图上的位置P i、路标L j在路标地图上的位置P j、机器人的估计位姿和全局位姿观测值G pose,构建粒子滤波模型或扩展卡尔曼滤波模型,采用粒子滤波模型,持续对机器人的估计位姿进行校正,直至误差E 1和误差E 2最小时为止可以是:基于机器人的估计位姿和预设时间段内的位姿变化量,获取当前粒子群,粒子群中的每一个粒子表示机器人的一个位姿,粒子群用于表示机器人位姿的概率分布;为当前粒子群中的每一个粒子分配权重,并根据分配的权重获得机器人的第一估计位姿,其中,权重用于表征每一个粒子为当前实际位姿的概率;基于第一估计位姿、路标L i在路标地图上的位置P i、 路标L j在路标地图上的位置P j和全局位姿观测值G pose,采用预设扫描匹配模型进行扫描匹配,得到第二估计位姿;持续对第二估计位姿进行校正,直至误差E 1和误差E 2时得到的估计位姿作为机器人的最终位姿。
从上述附图1示例的融合定位方法可知,本申请的技术方案一方面通过环境物进行采样,匹配与环境物中路标L i的位置P i最近的路标L j,取得路标L i与路标L j在路标地图上的位置差值E 1;另一方面,使用环境物的去畸变点云数据在二维栅格地图进行扫面匹配,进而取得机器人的估计位姿与机器人全局位姿观测值G pose之间的差值E 2,最后对机器人的位姿进行迭代优化,直至误差E 1和误差E 2最小时为止,相对于现有技术仅仅采用路标辅助定位,本申请的技术方案融合了路标定位和二维栅格地图定位,从而使得对机器人的定位更加精确。
请参阅附图2,是本申请实施例提供的一种融合定位装置,可以包括第一获取模块201、第二获取模块202、第三获取模块203和迭代优化模块204,详述如下:
第一获取模块201,用于通过机器人搭载的二维激光雷达对环境物进行采样,获取环境物的雷达点云相关数据,其中,雷达点云相关数据包括环境物中路标L i在路标地图上的位置P i和环境物的去畸变点云数据;
第二获取模块202,用于将环境物中路标L i在路标地图上的位置P i与路标地图进行邻近匹配,获取路标地图上距离位置P i最近的路标L j的位置P j
第三获取模块203,用于使用环境物的去畸变点云数据在二维栅格地图进行 扫面匹配,获取机器人在二维栅格地图中的全局位姿作为机器人的全局位姿观测值G pose
迭代优化模块204,用于根据路标L i在路标地图上的位置P i、路标L j在路标地图上的位置P j、机器人的估计位姿和全局位姿观测值G pose,对机器人的位姿进行迭代优化,直至误差E 1和误差E 2最小时为止,其中,误差E 1为路标L i与路标L j在路标地图上的位置差值,误差E 2为机器人的估计位姿与全局位姿观测值G pose之间的差值。
可选地,附图2示例的第一获取模块201可以包括采样单元、校正单元、聚类单元和中心求取单元,其中:
采样单元,用于通过二维激光雷达对环境物进行采样,得到环境物的原始雷达点云数据;
校正单元,用于基于原始雷达点云数据中每个点的相对位姿,对环境物的原始雷达点云数据进行去畸变,得到环境物的去畸变点云数据;
聚类单元,用于对环境物中路标L i的去畸变点云数据进行聚类,得到路标L i的点云簇;
中心求取单元,用于求取路标L i的点云簇在机器人本体坐标系下的几何中心坐标,将路标L i的点云簇在机器人本体坐标系下的几何中心坐标作为路标L i在路标地图上的位置P i
可选地,上述校正单元可包括新原点确定单元和相乘单元,其中:
新原点确定单元,用于确定一个扫描周期后,二维激光雷达所在激光雷达坐标系的新原点;
相乘单元,用于将相对位姿△T与原始雷达点云数据中每个点在旧原点的激光雷达坐标系下的坐标P相乘,相乘后的结果△T*P作为原始雷达点云数据中每个点的坐标,其中,相对位姿△T为原始雷达点云数据中每个点相对于新原点得到的机器人位姿数据。
可选地,附图2示例的装置还可以包括比较模块和筛选模块,其中:
比较模块,用于将环境物的去畸变点云数据中各个点对应的激光反光强度与预设光强阈值相比;
筛选模块,用于保留环境物的去畸变点云数据中激光反光强度高于预设光强阈值的点云数据,作为环境物中路标L i的去畸变点云数据。
可选地,附图2示例的第三获取模块203可以包括搜索单元、匹配单元和全局位姿确定单元,其中:
搜索单元,用于由机器人当前时刻前一时刻的位姿确定位姿搜索空间中的若干候选位姿;
匹配单元,用于利用若干候选位姿中的每一个候选位姿,将环境物的去畸变点云数据投影至二维栅格地图,计算每一个候选位姿在二维栅格地图上的匹配得分;
全局位姿确定单元,用于将若干候选位姿在二维栅格地图上的匹配得分最高的候选位姿确定为机器人在二维栅格地图中的全局位姿。
可选地,附图2示例的迭代优化模块204可以包括问题构建单元和第一校正单元,其中:
构建单元,用于以机器人的位姿作为状态构建非线性最小二乘化问题;
第一校正单元,用于以误差E 1为非线性最小二乘化问题的误差约束条件,迭代优化机器人的位姿,直至误差E 1和误差E 2最小时为止。
可选地,附图2示例的迭代优化模块204可以包括模型构建单元和第二校正单元,其中:
模型构建单元,用于基于路标L i在路标地图上的位置P i、路标L j在路标地图上的位置P j、机器人的估计位姿和全局位姿观测值G pose,构建粒子滤波模型或扩展卡尔曼滤波模型;
第二校正单元,用于采用粒子滤波模型或扩展卡尔曼滤波模型,持续对机器人的估计位姿进行校正,直至误差E 1和误差E 2最小时为止。
从以上技术方案的描述中可知,本申请的技术方案一方面通过环境物进行采样,匹配与环境物中路标L i的位置P i最近的路标L j,取得路标L i与路标L j在路标地图上的位置差值E 1;另一方面,使用环境物的去畸变点云数据在二维栅格地图进行扫面匹配,进而取得机器人的估计位姿与机器人全局位姿观测值G pose之间的差值E 2,最后对机器人的位姿进行迭代优化,直至误差E 1和误差E 2最小时为止,相对于现有技术仅仅采用路标辅助定位,本申请的技术方案融合了路标定位和二维栅格地图定位,从而使得对机器人的定位更加精确。
请参阅图3,是本申请一实施例提供的设备的结构示意图。如图3所示,该实施例的设备3主要包括:处理器30、存储器31以及存储在存储器31中并可在处理器30上运行的计算机程序32,例如融合定位方法的程序。处理器30执行计算机程序32时实现上述融合定位方法实施例中的步骤,例如图1所示的步骤S101至S104。或者,处理器30执行计算机程序32时实现上述各装置实施例中各模块/单元的功能,例如图2所示第一获取模块201、第二获取模块202、第三获取模块203和迭代优化模块204的功能。
示例性地,融合定位方法的计算机程序32主要包括:通过机器人搭载的二维激光雷达对环境物进行采样,获取环境物的雷达点云相关数据,其中,雷达点云相关数据包括环境物中路标L i在路标地图上的位置P i和环境物的去畸变点云数据;将环境物中路标L i在路标地图上的位置P i与路标地图进行邻近匹配,获取路标地图上距离位置P i最近的路标L j的位置P j;使用环境物的去畸变点云数据在二维栅格地图进行扫面匹配,获取机器人在二维栅格地图中的全局位姿作为机器人的全局位姿观测值G pose;根据路标L i在路标地图上的位置P i、路标L j在路 标地图上的位置P j、机器人的估计位姿和全局位姿观测值G pose,对机器人的位姿进行迭代优化,直至误差E 1和误差E 2最小时为止,其中,误差E 1为路标L i与路标L j在路标地图上的位置差值,误差E 2为机器人的估计位姿与全局位姿观测值G pose之间的差值。计算机程序32可以被分割成一个或多个模块/单元,一个或者多个模块/单元被存储在存储器31中,并由处理器30执行,以完成本申请。一个或多个模块/单元可以是能够完成特定功能的一系列计算机程序指令段,该指令段用于描述计算机程序32在设备3中的执行过程。例如,计算机程序32可以被分割成第一获取模块201、第二获取模块202、第三获取模块203和迭代优化模块204(虚拟装置中的模块)的功能,各模块具体功能如下:第一获取模块201,用于通过机器人搭载的二维激光雷达对环境物进行采样,获取环境物的雷达点云相关数据,其中,雷达点云相关数据包括环境物中路标L i在路标地图上的位置P i和环境物的去畸变点云数据;第二获取模块202,用于将环境物中路标L i在路标地图上的位置P i与路标地图进行邻近匹配,获取路标地图上距离位置P i最近的路标L j的位置P j;第三获取模块203,用于使用环境物的去畸变点云数据在二维栅格地图进行扫面匹配,获取机器人在二维栅格地图中的全局位姿作为机器人的全局位姿观测值G pose;迭代优化模块204,用于根据路标L i在路标地图上的位置P i、路标L j在路标地图上的位置P j、机器人的估计位姿和全局位姿观测值G pose,对机器人的位姿进行迭代优化,直至误差E 1和误差E 2最小时为止,其中,误差E 1为路标L i与路标L j在路标地图上的位置差值,误差E 2为机器人的估计位姿与全局位姿观测值G pose之间的差值。
设备3可包括但不仅限于处理器30、存储器31。本领域技术人员可以理解,图3仅仅是设备3的示例,并不构成对设备3的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件,例如计算设备还可以包括输入输出设备、网络接入设备、总线等。
所称处理器30可以是中央处理单元(Central Processing Unit,CPU),还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专 用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
存储器31可以是设备3的内部存储单元,例如设备3的硬盘或内存。存储器31也可以是设备3的外部存储设备,例如设备3上配备的插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)等。进一步地,存储器31还可以既包括设备3的内部存储单元也包括外部存储设备。存储器31用于存储计算机程序以及设备所需的其他程序和数据。存储器31还可以用于暂时地存储已经输出或者将要输出的数据。
以上所述实施例的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本说明书记载的范围。
以上所述实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对申请专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请专利的保护范围应以所附权利要求为准。

Claims (20)

  1. 一种融合定位方法,所述方法包括:
    通过机器人搭载的二维激光雷达对环境物进行采样,获取所述环境物的雷达点云相关数据,所述雷达点云相关数据包括所述环境物中路标L i在路标地图上的位置P i和所述环境物的去畸变点云数据;
    将所述路标L i在路标地图上的位置P i与路标地图进行邻近匹配,获取所述路标地图上距离所述位置P i最近的路标L j的位置P j
    使用所述环境物的去畸变点云数据在二维栅格地图进行扫面匹配,获取所述机器人在所述二维栅格地图中的全局位姿作为所述机器人的全局位姿观测值G pose
    根据所述路标L i在路标地图上的位置P i、所述路标L j在所述路标地图上的位置P j、所述机器人的估计位姿和所述全局位姿观测值G pose,对所述机器人的位姿进行迭代优化,直至误差E 1和误差E 2最小时为止,所述误差E 1为所述路标L i与所述路标L j在所述路标地图上的位置差值,所述误差E 2为所述机器人的估计位姿与所述全局位姿观测值G pose之间的差值。
  2. 如权利要求1所述融合定位方法,其特征在于,所述通过机器人搭载的二维激光雷达对环境物进行采样,获取所述环境物的雷达点云相关数据,包括:
    通过所述二维激光雷达对所述环境物进行采样,得到所述环境物的原始雷达点云数据;
    基于所述原始雷达点云数据中每个点的相对位姿,对所述环境物的原始雷达点云数据进行去畸变,得到所述环境物的去畸变点云数据;
    对所述环境物中路标L i的去畸变点云数据进行聚类,得到所述路标L i的点云簇;
    求取所述点云簇在所述机器人本体坐标系下的几何中心坐标,将所述点云簇在所述机器人本体坐标系下的几何中心坐标作为所述路标L i在路标地图上的位置P i
  3. 如权利要求2所述融合定位方法,其特征在于,所述基于所述原始雷达点云数据中每个点的相对位姿,对所述环境物的原始雷达点云数据进行去畸变,得到所述环境物的去畸变点云数据,包括:
    确定一个扫描周期后,所述二维激光雷达所在激光雷达坐标系的新原点;
    将相对位姿△T与所述原始雷达点云数据中每个点在旧原点的激光雷达坐标系下的坐标P相乘,相乘后的结果△T*P作为所述原始雷达点云数据中每个点的坐标,所述相对位姿△T为所述原始雷达点云数据中每个点相对于所述新原点得到的机器人位姿数据。
  4. 如权利要求2所述融合定位方法,其特征在于,所述对所述环境物的原始雷达点云数据进行去畸变,得到所述环境物的去畸变点云数据之后,所述方法还包括:
    将所述环境物的去畸变点云数据中各个点对应的激光反光强度与预设光强阈值相比;
    保留所述环境物的去畸变点云数据中激光反光强度高于所述预设光强阈值的点云数据,作为所述环境物中路标L i的去畸变点云数据。
  5. 如权利要求1所述融合定位方法,其特征在于,所述使用所述环境物的去畸变点云数据在二维栅格地图进行扫面匹配,获取所述机器人在所述二维栅格地图中的全局位姿作为所述机器人的全局位姿观测值G pose,包括:
    由所述机器人当前时刻前一时刻的位姿确定位姿搜索空间中的若干候选位姿;
    利用所述若干候选位姿中的每一个候选位姿,将所述环境物的去畸变点云数据投影至所述二维栅格地图,计算所述每一个候选位姿在所述二维栅格地图上的匹配得分;
    将所述若干候选位姿在所述二维栅格地图上的匹配得分最高的候选位姿确定为所述机器人在所述二维栅格地图中的全局位姿。
  6. 如权利要求1所述融合定位方法,其特征在于,所述根据所述路标L i在路标地图上的位置P i、所述路标L j在所述路标地图上的位置P j、所述机器人的估计位姿和所述全局位姿观测值G pose,对所述机器人的位姿进行迭代优化,直至误差E 1和误差E 2最小时为止,包括:
    以所述机器人的位姿作为状态构建非线性最小二乘化问题;
    以所述误差E 1为所述非线性最小二乘化问题的误差约束条件,迭代优化所述机器人的位姿,直至所述误差E 1和所述误差E 2最小时为止。
  7. 如权利要求1所述融合定位方法,其特征在于,所述根据所述路标L i在路标地图上的位置P i、所述路标L j在所述路标地图上的位置P j、所述机器人的估计位姿和所述全局位姿观测值G pose,对所述机器人的位姿进行迭代优化,直至误差E 1和误差E 2最小时为止,包括:
    基于所述路标L i在路标地图上的位置P i、所述路标L j在所述路标地图上的位置P j、所述机器人的估计位姿和所述全局位姿观测值G pose,构建粒子滤波模型或扩展卡尔曼滤波模型;
    采用所述粒子滤波模型或扩展卡尔曼滤波模型,持续对所述机器人的估计位姿进行校正,直至误差E 1和误差E 2最小时为止。
  8. 一种融合定位装置,所述装置包括:
    第一获取模块,用于通过机器人搭载的二维激光雷达对环境物进行采样,获取所述环境物的雷达点云相关数据,所述雷达点云相关数据包括所述环境物中路标L i在路标地图上的位置P i和所述环境物的去畸变点云数据;
    第二获取模块,用于将所述路标L i在路标地图上的位置P i与路标地图进行邻近匹配,获取所述路标地图上距离所述位置P i最近的路标L j的位置P j
    第三获取模块,用于使用所述环境物的去畸变点云数据在二维栅格地图进行扫面匹配,获取所述机器人在所述二维栅格地图中的全局位姿作为所述机器人的全局位姿观测值G pose
    迭代优化模块,用于根据所述路标L i在路标地图上的位置P i、所述路标L j在所述路标地图上的位置P j、所述机器人的估计位姿和所述全局位姿观测值G pose,对所述机器人的位姿进行迭代优化,直至误差E 1和误差E 2最小时为止,所述误差E 1为所述路标L i与所述路标L j在所述路标地图上的位置差值,所述误差E 2为所述机器人的估计位姿与所述全局位姿观测值G pose之间的差值。
  9. 如权利要求8所述融合定位装置,其特征在于,所述第一获取模块包括:
    采样单元,用于通过所述二维激光雷达对所述环境物进行采样,得到所述环境物的原始雷达点云数据;
    校正单元,用于基于所述原始雷达点云数据中每个点的相对位姿,对所述环境物的原始雷达点云数据进行去畸变,得到所述环境物的去畸变点云数据;
    聚类单元,用于对所述环境物中路标L i的去畸变点云数据进行聚类,得到所述路标L i的点云簇;
    中心求取单元,用于求取所述点云簇在所述机器人本体坐标系下的几何中心坐标,将所述点云簇在所述机器人本体坐标系下的几何中心坐标作为所述路标L i在路标地图上的位置P i
  10. 如权利要求9所述融合定位装置,其特征在于,所述校正单元包括:
    新原点确定单元,用于确定一个扫描周期后,所述二维激光雷达所在激光雷达坐标系的新原点;
    相乘单元,用于将相对位姿△T与所述原始雷达点云数据中每个点在旧原点的激光雷达坐标系下的坐标P相乘,相乘后的结果△T*P作为所述原始雷达点云数据中每个点的坐标,所述相对位姿△T为所述原始雷达点云数据中每个点相对于所述新原点得到的机器人位姿数据。
  11. 如权利要求9所述融合定位装置,其特征在于,所述装置还包括:
    比较模块,用于将所述环境物的去畸变点云数据中各个点对应的激光反光强度与预设光强阈值相比;
    筛选模块,用于保留所述环境物的去畸变点云数据中激光反光强度高于所述预设光强阈值的点云数据,作为所述环境物中路标L i的去畸变点云数据。
  12. 如权利要求8所述融合定位装置,其特征在于,所述第三获取模块包括:
    搜索单元,用于由所述机器人当前时刻前一时刻的位姿确定位姿搜索空间中的若干候选位姿;
    匹配单元,用于利用所述若干候选位姿中的每一个候选位姿,将所述环境物的去畸变点云数据投影至所述二维栅格地图,计算所述每一个候选位姿在所述二维栅格地图上的匹配得分;
    全局位姿确定单元,用于将所述若干候选位姿在所述二维栅格地图上的匹配得分最高的候选位姿确定为所述机器人在所述二维栅格地图中的全局位姿。
  13. 如权利要求8所述融合定位装置,其特征在于,所述迭代优化模块包括:
    构建单元,用于以所述机器人的位姿作为状态构建非线性最小二乘化问题;
    第一校正单元,用于以所述误差E 1为所述非线性最小二乘化问题的误差约束条件,迭代优化所述机器人的位姿,直至所述误差E 1和所述误差E 2最小时为止。
  14. 如权利要求8所述融合定位装置,其特征在于,所述迭代优化模块包括:
    模型构建单元,用于基于所述路标L i在路标地图上的位置P i、所述路标L j在所述路标地图上的位置P j、所述机器人的估计位姿和所述全局位姿观测值G pose,构建粒子滤波模型或扩展卡尔曼滤波模型;
    第二校正单元,用于采用所述粒子滤波模型或扩展卡尔曼滤波模型,持续对所述机器人的估计位姿进行校正,直至误差E 1和误差E 2最小时为止。
  15. 一种设备,所述设备包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机程序,所述处理器执行所述计算机程序时实现如权利要求1所述方法的步骤。
  16. 如权利要求15所述设备,其特征在于,所述通过机器人搭载的二维激光雷达对环境物进行采样,获取所述环境物的雷达点云相关数据,包括:
    通过所述二维激光雷达对所述环境物进行采样,得到所述环境物的原始雷达点云数据;
    基于所述原始雷达点云数据中每个点的相对位姿,对所述环境物的原始雷达点云数据进行去畸变,得到所述环境物的去畸变点云数据;
    对所述环境物中路标L i的去畸变点云数据进行聚类,得到所述路标L i的点云簇;
    求取所述点云簇在所述机器人本体坐标系下的几何中心坐标,将所述点云簇在所述机器人本体坐标系下的几何中心坐标作为所述路标L i在路标地图上的位置P i
  17. 如权利要求16所述设备,其特征在于,所述基于所述原始雷达点云数据中每个点的相对位姿,对所述环境物的原始雷达点云数据进行去畸变,得到所述环境物的去畸变点云数据,包括:
    确定一个扫描周期后,所述二维激光雷达所在激光雷达坐标系的新原点;
    将相对位姿△T与所述原始雷达点云数据中每个点在旧原点的激光雷达坐标系下的坐标P相乘,相乘后的结果△T*P作为所述原始雷达点云数据中每个点的坐标,所述相对位姿△T为所述原始雷达点云数据中每个点相对于所述新原点得到的机器人位姿数据。
  18. 如权利要求16所述设备,其特征在于,所述对所述环境物的原始雷达点云数据进行去畸变,得到所述环境物的去畸变点云数据之后,所述处理器还用于执行:
    将所述环境物的去畸变点云数据中各个点对应的激光反光强度与预设光强阈值相比;
    保留所述环境物的去畸变点云数据中激光反光强度高于所述预设光强阈值的点云数据,作为所述环境物中路标L i的去畸变点云数据。
  19. 如权利要求15所述设备,其特征在于,所述使用所述环境物的去畸变点云数据在二维栅格地图进行扫面匹配,获取所述机器人在所述二维栅格地图中的全局位姿作为所述机器人的全局位姿观测值G pose,包括:
    由所述机器人当前时刻前一时刻的位姿确定位姿搜索空间中的若干候选位姿;
    利用所述若干候选位姿中的每一个候选位姿,将所述环境物的去畸变点云数据投影至所述二维栅格地图,计算所述每一个候选位姿在所述二维栅格地图上的匹配得分;
    将所述若干候选位姿在所述二维栅格地图上的匹配得分最高的候选位姿确定为所述机器人在所述二维栅格地图中的全局位姿。
  20. 一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时实现如权利要求1所述方法的步骤。
PCT/CN2021/135015 2020-12-29 2021-12-02 融合定位方法、装置、设备和计算机可读存储介质 WO2022142992A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2023521741A JP2023545279A (ja) 2020-12-29 2021-12-02 融合測位方法、装置、デバイス、及びコンピューター可読記憶媒体
EP21913733.8A EP4206736A1 (en) 2020-12-29 2021-12-02 Fusion positioning method and apparatus, device and computer-readable storage medium
US18/248,780 US20230400585A1 (en) 2020-12-29 2021-12-02 Fusion positioning method and apparatus, device and computer-readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011586133.3A CN112764053B (zh) 2020-12-29 2020-12-29 一种融合定位方法、装置、设备和计算机可读存储介质
CN202011586133.3 2020-12-29

Publications (1)

Publication Number Publication Date
WO2022142992A1 true WO2022142992A1 (zh) 2022-07-07

Family

ID=75696343

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/135015 WO2022142992A1 (zh) 2020-12-29 2021-12-02 融合定位方法、装置、设备和计算机可读存储介质

Country Status (5)

Country Link
US (1) US20230400585A1 (zh)
EP (1) EP4206736A1 (zh)
JP (1) JP2023545279A (zh)
CN (1) CN112764053B (zh)
WO (1) WO2022142992A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115290098A (zh) * 2022-09-30 2022-11-04 成都朴为科技有限公司 一种基于变步长的机器人定位方法和系统
CN115307646A (zh) * 2022-10-08 2022-11-08 浙江光珀智能科技有限公司 一种多传感器融合的机器人定位方法、系统及装置
CN115421125A (zh) * 2022-11-07 2022-12-02 山东富锐光学科技有限公司 一种基于数据融合的雷达点云数据惯性校正方法
CN116164747A (zh) * 2022-12-15 2023-05-26 广东智能无人系统研究院(南沙) 一种水下机器人的定位与导航方法及系统

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112764053B (zh) * 2020-12-29 2022-07-15 深圳市普渡科技有限公司 一种融合定位方法、装置、设备和计算机可读存储介质
CN114643579B (zh) * 2022-03-29 2024-01-16 深圳优地科技有限公司 一种机器人定位方法、装置、机器人及存储介质
CN115127559A (zh) * 2022-06-28 2022-09-30 广东利元亨智能装备股份有限公司 一种定位方法、装置、设备和存储介质
CN117367419A (zh) * 2022-06-29 2024-01-09 深圳市海柔创新科技有限公司 机器人定位方法、装置和计算可读存储介质
CN117570998B (zh) * 2024-01-17 2024-04-02 山东大学 基于反光柱信息的机器人定位方法及系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106908775A (zh) * 2017-03-08 2017-06-30 同济大学 一种基于激光反射强度的无人车实时定位方法
CN110927740A (zh) * 2019-12-06 2020-03-27 合肥科大智能机器人技术有限公司 一种移动机器人定位方法
CN111060092A (zh) * 2019-12-31 2020-04-24 芜湖哈特机器人产业技术研究院有限公司 一种反光柱的快速匹配方法
US20200301015A1 (en) * 2019-03-21 2020-09-24 Foresight Ai Inc. Systems and methods for localization
CN111781609A (zh) * 2020-04-10 2020-10-16 昆山同孚智能技术有限公司 一种agv激光导航多边定位方法
CN112764053A (zh) * 2020-12-29 2021-05-07 深圳市普渡科技有限公司 一种融合定位方法、装置、设备和计算机可读存储介质

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108732603B (zh) * 2017-04-17 2020-07-10 百度在线网络技术(北京)有限公司 用于定位车辆的方法和装置
CN108917759A (zh) * 2018-04-19 2018-11-30 电子科技大学 基于多层次地图匹配的移动机器人位姿纠正算法
CN109270544A (zh) * 2018-09-20 2019-01-25 同济大学 基于杆状物识别的移动机器人自定位系统
CN110221603B (zh) * 2019-05-13 2020-08-14 浙江大学 一种基于激光雷达多帧点云融合的远距离障碍物检测方法
CN110487286B (zh) * 2019-08-09 2022-12-20 上海电器科学研究所(集团)有限公司 基于点特征投影与激光点云融合的机器人位姿判断方法
CN111045017B (zh) * 2019-12-20 2023-03-31 成都理工大学 一种激光和视觉融合的巡检机器人变电站地图构建方法
CN111812669B (zh) * 2020-07-16 2023-08-04 南京航空航天大学 绕机检查装置及其定位方法、存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106908775A (zh) * 2017-03-08 2017-06-30 同济大学 一种基于激光反射强度的无人车实时定位方法
US20200301015A1 (en) * 2019-03-21 2020-09-24 Foresight Ai Inc. Systems and methods for localization
CN110927740A (zh) * 2019-12-06 2020-03-27 合肥科大智能机器人技术有限公司 一种移动机器人定位方法
CN111060092A (zh) * 2019-12-31 2020-04-24 芜湖哈特机器人产业技术研究院有限公司 一种反光柱的快速匹配方法
CN111781609A (zh) * 2020-04-10 2020-10-16 昆山同孚智能技术有限公司 一种agv激光导航多边定位方法
CN112764053A (zh) * 2020-12-29 2021-05-07 深圳市普渡科技有限公司 一种融合定位方法、装置、设备和计算机可读存储介质

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115290098A (zh) * 2022-09-30 2022-11-04 成都朴为科技有限公司 一种基于变步长的机器人定位方法和系统
CN115290098B (zh) * 2022-09-30 2022-12-23 成都朴为科技有限公司 一种基于变步长的机器人定位方法和系统
CN115307646A (zh) * 2022-10-08 2022-11-08 浙江光珀智能科技有限公司 一种多传感器融合的机器人定位方法、系统及装置
CN115307646B (zh) * 2022-10-08 2023-03-24 浙江光珀智能科技有限公司 一种多传感器融合的机器人定位方法、系统及装置
CN115421125A (zh) * 2022-11-07 2022-12-02 山东富锐光学科技有限公司 一种基于数据融合的雷达点云数据惯性校正方法
CN116164747A (zh) * 2022-12-15 2023-05-26 广东智能无人系统研究院(南沙) 一种水下机器人的定位与导航方法及系统
CN116164747B (zh) * 2022-12-15 2023-09-05 广东智能无人系统研究院(南沙) 一种水下机器人的定位与导航方法及系统

Also Published As

Publication number Publication date
CN112764053B (zh) 2022-07-15
JP2023545279A (ja) 2023-10-27
CN112764053A (zh) 2021-05-07
EP4206736A1 (en) 2023-07-05
US20230400585A1 (en) 2023-12-14

Similar Documents

Publication Publication Date Title
WO2022142992A1 (zh) 融合定位方法、装置、设备和计算机可读存储介质
CN109345574B (zh) 基于语义点云配准的激光雷达三维建图方法
CN109521403B (zh) 多线激光雷达的参数标定方法及装置、设备及可读介质
WO2022156176A1 (zh) 多种雷达和相机联合标定方法、系统、设备及存储介质
WO2022142948A1 (zh) 动态目标跟踪定位方法、装置、设备和存储介质
US9242171B2 (en) Real-time camera tracking using depth maps
CN110927740A (zh) 一种移动机器人定位方法
CN112068154A (zh) 一种激光建图定位方法、装置、存储介质及电子设备
US10288425B2 (en) Generation of map data
CN111862215B (zh) 一种计算机设备定位方法、装置、计算机设备和存储介质
WO2022087916A1 (zh) 定位方法、装置、电子设备和存储介质
WO2022227489A1 (zh) 针对物体的碰撞检测方法、装置、设备和存储介质
Bosse et al. Histogram matching and global initialization for laser-only SLAM in large unstructured environments
CN108520543B (zh) 一种对相对精度地图进行优化的方法、设备及存储介质
CN115326051A (zh) 一种基于动态场景的定位方法、装置、机器人及介质
Ge et al. Visual features assisted robot localization in symmetrical environment using laser SLAM
US11353476B2 (en) Method and apparatus for determining velocity of obstacle, device and medium
CN112362059B (zh) 移动载体的定位方法、装置、计算机设备和介质
US10963733B2 (en) Associating spatial point sets with candidate correspondences
WO2024001083A1 (zh) 一种定位方法、装置、设备和存储介质
Charroud et al. Rapid localization and mapping method based on adaptive particle filters
CN110853098A (zh) 机器人定位方法、装置、设备及存储介质
Dong et al. Motion estimation of indoor robot based on image sequences and improved particle filter
Arnaud et al. On the fly plane detection and time consistency for indoor building wall recognition using a tablet equipped with a depth sensor
Vokhmintcev et al. Robot mapping algorithm based on Kalman filtering and symbolic tags

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21913733

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021913733

Country of ref document: EP

Effective date: 20230330

ENP Entry into the national phase

Ref document number: 2023521741

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE