CN117495968A - Mobile robot pose tracking method and device based on 3D laser radar - Google Patents

Info

Publication number
CN117495968A
Authority
CN
China
Prior art keywords
pose
frame
current
key frame
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410001849.4A
Other languages
Chinese (zh)
Other versions
CN117495968B (en)
Inventor
孙波
李道胜
李涛
宋海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Zhongde Ruibo Intelligent Technology Co ltd
Original Assignee
Suzhou Zhongde Ruibo Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Zhongde Ruibo Intelligent Technology Co ltd filed Critical Suzhou Zhongde Ruibo Intelligent Technology Co ltd
Priority to CN202410001849.4A priority Critical patent/CN117495968B/en
Publication of CN117495968A publication Critical patent/CN117495968A/en
Application granted granted Critical
Publication of CN117495968B publication Critical patent/CN117495968B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a mobile robot pose tracking method and device based on a 3D laser radar. The method comprises the following steps: S01, a mobile robot carrying the 3D laser radar performs a traversal scan of the environment to acquire laser point cloud data; inter-frame matching is performed on each frame of laser point cloud data during the traversal to obtain the relative spatial relationship between adjacent frames of point cloud data, key frames are selected according to the relative spatial relationship, and each key frame and its pose are stored; S02, when the mobile robot needs to be tracked, a current key frame is selected in real time and its pose is initialized, key frames close to the initialized pose are selected from the stored key frames and spliced to form a local subgraph, and the pose of the robot is determined relative to the pose of the local subgraph using the current key frame, realizing pose tracking. The method has the advantages of a simple implementation, low computational complexity, high tracking efficiency, high reliability and strong flexibility.

Description

Mobile robot pose tracking method and device based on 3D laser radar
Technical Field
The invention relates to the technical field of mobile robot positioning, in particular to a mobile robot pose tracking method and device based on a 3D laser radar.
Background
Autonomous navigation is an important indicator of how intelligent a mobile robot is, and a mobile robot with autonomous navigation must first determine its pose relative to the surrounding environment, i.e. solve the robot positioning problem. Traditional robot positioning approaches, such as navigation with the Global Positioning System (GPS), must rely on GPS signals: positioning accuracy and reliability are difficult to guarantee in areas with weak GPS reception, such as under tree cover or in the shadow of buildings, and positioning is lost completely in indoor scenes, tunnels and other places where GPS signals cannot be received. Simultaneous localization and mapping (SLAM) technology can solve both the positioning and the mapping problems of a robot, but most SLAM systems used for autonomous navigation in the prior art are based on 2D grid maps; because a 2D grid map describes three-dimensional space with a two-dimensional plane, it cannot accurately describe obstacles in the environment or accurately reflect the pose of the robot relative to the map, so the actual positioning accuracy is not high.
Other robot positioning methods, such as those based on machine learning, recognize semantic information in the environment through deep learning and use it to localize the robot, but machine learning methods place very high demands on the computing power of the robot's on-board computer and have poor real-time performance, so it is difficult for them to provide robot pose information in real time.
In summary, the positioning methods for autonomous mobile navigation robots in the prior art have poor reliability and real-time performance and lack a detailed description of the three-dimensional environment, so it is difficult for them to provide stable and accurate pose information for an autonomously navigating mobile robot.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: aiming at the technical problems existing in the prior art, the invention provides a mobile robot pose tracking method and device based on a 3D laser radar that is simple to implement, has low computational complexity, high tracking efficiency, high reliability and strong flexibility, and can realize lightweight, accurate, real-time pose tracking of a mobile robot.
In order to solve the technical problems, the technical scheme provided by the invention is as follows:
a mobile robot pose tracking method based on a 3D laser radar comprises the following steps:
s01, map construction: performing traversing scanning on the environment by a mobile robot carrying a 3D laser radar, performing inter-frame matching on laser point cloud data of each frame obtained by scanning in the traversing process, sequentially obtaining relative spatial relations between two adjacent frames of point cloud data, selecting key frames according to the relative spatial relations, and storing each selected key frame and a pose corresponding to the key frame;
s02, tracking the pose: when the mobile robot is required to be tracked, in the running process of the robot, a key frame is selected from laser point cloud data obtained by laser radar scanning in real time to serve as a current key frame, pose initialization is carried out, key frames close to the pose of the current key frame are selected from the stored key frames after initialization, stitching is carried out to form a local subgraph, and the pose of the real-time robot is determined relative to the pose of the local subgraph by utilizing the current key frame, so that pose tracking of the robot is realized.
Further, in step S01, when a key frame is selected according to the relative spatial relationship, if the accumulated translation distance or rotation angle between the current frame and the previous key frame is greater than a preset spatial threshold, the current frame is selected as a key frame.
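As an illustration of this rule, the following Python sketch shows one way such a threshold test could be implemented; the 4x4 homogeneous pose matrices, the function name and the threshold values are assumptions of the example, not details prescribed by the disclosure.

```python
import numpy as np

def is_new_keyframe(T_prev_kf, T_cur, trans_thresh=0.5, rot_thresh=np.deg2rad(30)):
    """Return True when the motion accumulated since the previous key frame
    exceeds the translation or rotation threshold (illustrative values)."""
    # Relative motion between the previous key frame and the current frame.
    T_rel = np.linalg.inv(T_prev_kf) @ T_cur
    translation = np.linalg.norm(T_rel[:3, 3])
    # Rotation angle recovered from the trace of the relative rotation matrix.
    cos_angle = np.clip((np.trace(T_rel[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    return translation > trans_thresh or np.arccos(cos_angle) > rot_thresh
```

Frames that fail the test would, as described below, only participate in motion estimation and would not have their point clouds retained.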
Further, step S02 includes:
s201, selecting one frame of laser point cloud data as a current key frame at intervals of a designated time;
s202, initializing the pose of the current key frame to obtain the pose of the initialized current key frame;
s203, carrying out inter-frame matching on the current continuous laser data frames to obtain a current space transformation relation;
S204, searching a key frame close to the pose of the initialized current key frame from the stored key frames according to the current spatial transformation relation, and splicing the found key frames to form a local subgraph;
s205, matching the current key frame with the local subgraph to obtain the pose of the current frame relative to the local subgraph;
s206, applying the pose of the current frame relative to the local subgraph to the space coordinates of the local subgraph to obtain the pose of the current robot;
s207, circularly executing the steps S201 to S206, and completing the pose tracking of the robot.
Further, in step S202, if the current key frame's laser point cloud data is the first frame of data, the pose corresponding to the current key frame is initialized to the starting-point pose of the robot's historical track; if the current key frame's laser point cloud data is not the first frame, the pose corresponding to the current key frame is initialized to the pose of the key frame spatially nearest to the coordinate $X_0$ in the robot's historical track, where the coordinate $X_0$ is calculated according to $X_0 = T_{sub}\,\Delta T_{last}$, in which $\Delta T_{last}$ is the relative spatial relationship between the key frame at the previous moment $F_{last}$ and its corresponding local subgraph $M_{last}$, and $T_{sub}$ is the spatial coordinates of the local subgraph $M_{last}$.
Further, in step S204, each found key frame $F_k$ is spliced according to the following formula to form the local subgraph $M$:

$p_j^{M} = T_M^{-1}\,T_k\,p_j$

where $p_j^{M}$ is the coordinate, in the local subgraph $M$, of the laser point $p_j$ of the $k$-th key frame $F_k$, $T_M$ is the coordinates of the local subgraph $M$, and $T_k$ is the saved coordinates of the key frame $F_k$.
Further, in step S205, if the pose of the initialized current key frame $F_c$ obtained after pose initialization is the pose of a key frame $F_i$, and the local subgraph spliced from the key frames near $F_i$ is $M$, the current frame $F_c$ and the local subgraph $M$ are registered by the ICP registration method to obtain the spatial coordinate relation $\Delta T$ of the current key frame $F_c$ relative to the local subgraph $M$; in step S206, the spatial coordinate relation $\Delta T$ of the current key frame $F_c$ relative to the local subgraph $M$ is applied to the spatial coordinates of the local subgraph $M$ to obtain the spatial coordinates of the current key frame $F_c$, and the spatial coordinates $T_c$ of the current key frame $F_c$ are calculated according to the following formula:

$T_c = T_M\,\Delta T$

where $T_c$ is the spatial coordinates of the current key frame $F_c$, i.e. the spatial coordinates of the tracked robot pose, $\Delta T$ is the relative spatial relationship between the current key frame $F_c$ and the local subgraph $M$, and $T_M$ is the spatial coordinates of the local subgraph $M$.
Further, in step S02, an ICP (Iterative Closest Point) registration method is adopted to perform inter-frame matching on the scanned laser point cloud frame data to obtain the spatial relationship between two adjacent frames of point cloud data, including:

for two consecutive frames of laser point cloud data $P = \{p_1, \dots, p_n\}$ and $Q = \{q_1, \dots, q_n\}$, where $p_i$ and $q_i$ are the laser point pairs with minimum corresponding distance between the two frames, searching for the Euclidean transformation $T$ such that the two frames of laser point cloud data satisfy the relation $\forall i,\; p_i = R\,q_i + t$,

where $R$ is a rotation matrix and $t$ is a translation vector, and the transformation matrix $T$, the rotation matrix $R$ and the translation vector $t$ satisfy $T = \begin{bmatrix} R & t \\ 0^{T} & 1 \end{bmatrix}$;

performing iterative computation with the ICP registration method until the error sum of squares $E = \sum_{i=1}^{n} \lVert p_i - (R\,q_i + t) \rVert^2$ reaches a minimum, and solving to obtain the transformation matrix $T$.
A mobile robot pose tracking device based on 3D lidar, comprising:
the map construction module is used for carrying out traversal scanning on the environment by a mobile robot carrying the 3D laser radar, carrying out inter-frame matching on each frame of laser point cloud data obtained by scanning in the traversal process, sequentially obtaining the relative spatial relationship between two adjacent frames of point cloud data, selecting key frames according to the relative spatial relationship, and storing each selected key frame and the pose corresponding to the key frame;
and the pose tracking module is used, when the mobile robot needs to be tracked, for selecting a key frame in real time from the laser point cloud data scanned by the laser radar as the current key frame and initializing its pose, for selecting, after initialization, key frames close to the pose of the current key frame from the stored key frames and splicing them to form a local subgraph, and for determining the real-time pose of the robot relative to the pose of the local subgraph using the current key frame, so as to realize pose tracking of the robot.
Further, the pose tracking module includes:
the current frame selecting unit is used for selecting one frame of laser point cloud data as a current key frame at intervals of a specified time, and calculating a corresponding pose to be used as a tracked robot pose;
the pose initializing unit is used for initializing the pose of the current key frame to obtain the initialized pose of the current key frame;
the inter-frame matching unit is used for performing inter-frame matching on the current continuous laser data frames to obtain a current space transformation relation;
the local map generating unit is used for searching a key frame close to the initialized current key frame pose from the stored key frames according to the current space transformation relation, and splicing the searched key frames to form a local subgraph;
the frame and sub-image matching unit is used for matching the current key frame with the local sub-image to obtain the pose of the current frame relative to the local sub-image;
and the pose conversion unit is used for applying the pose of the current frame relative to the local subgraph to the spatial coordinates of the local subgraph to obtain the current robot pose, and for transmitting the current robot pose both to the pose initialization unit, where it initializes the pose of the next frame, and to the pose output unit, which outputs the real-time pose of the robot tracked at the current moment.
The mobile robot pose tracking device based on the 3D laser radar comprises a 3D laser radar, a processor and a memory; the 3D laser radar is mounted on the mobile robot and is used for scanning the surrounding environment of the mobile robot to obtain laser point cloud data, the memory is used for storing a computer program, and the processor is used for executing the computer program so as to perform the above method.
Compared with the prior art, the invention has the following advantages. The invention senses the robot's environment with a 3D laser radar to construct a three-dimensional environment point cloud map, while retaining the key frames screened out as the robot runs. When the robot's pose needs to be tracked, key frames close to the real-time point cloud pose are searched for and spliced into a local subgraph, and the real-time robot pose is determined from the pose between the real-time laser point cloud and the local subgraph. Thus, when the robot returns to the vicinity of its historical travel track, the correspondence between the laser point cloud observed by the laser radar at the current moment and the retained coordinates and key frames can be fully exploited to complete pose tracking of the robot. This greatly reduces the computation and implementation cost of the tracking process, realizes lightweight, real-time and accurate pose tracking of the mobile robot without relying on additional sensor equipment, and can be flexibly applied to mobile robots with different computing capabilities.
Drawings
Fig. 1 is a schematic diagram of an implementation flow of a mobile robot pose tracking method based on a 3D lidar in this embodiment.
Fig. 2 is a detailed flowchart of the mobile robot pose tracking method according to the present embodiment.
Fig. 3 is a schematic diagram of the present embodiment for realizing pose tracking of a mobile robot.
Detailed Description
The invention is further described below in connection with the drawings and the specific preferred embodiments, but the scope of protection of the invention is not limited thereby.
According to the lightweight mobile robot pose tracking method of this embodiment, the surrounding environment of the robot is perceived with a 3D laser radar and the construction of a three-dimensional environment point cloud map is completed; at the same time, key frames are selected in real time from the three-dimensional laser point cloud data according to their relative spatial relationships and stored. When the robot's pose needs to be tracked, the method searches, while the robot moves, for stored key frames whose poses are close to that of the current laser point cloud, splices the found key frames into a local subgraph, and determines the robot's pose at the next moment from the relative coordinate relation between the current laser point cloud and the local subgraph together with the spatial coordinates of the local subgraph. In this way, when the robot drives near its historical travel track again, the stored key frames can be fully exploited to splice a local subgraph, through which the robot's pose relative to the environment point cloud map is quickly determined, realizing real-time tracking of the robot's pose relative to the map and thereby providing accurate real-time pose information for path planning and autonomous navigation of the mobile robot.
As shown in fig. 1 to 3, the method for tracking the pose of the mobile robot based on the 3D laser radar in this embodiment includes the following steps:
s01, map construction: and traversing the environment by a mobile robot carrying the 3D laser radar, carrying out inter-frame matching on each frame of laser point cloud data obtained by scanning in the traversing process, sequentially obtaining the relative spatial relationship between two adjacent frames of point cloud data, selecting key frames according to the relative spatial relationship, and storing each selected key frame and the pose corresponding to the key frame.
Specifically, a 3D laser radar is mounted on the mobile robot in advance, and the mobile robot traverses and scans the environment for one full loop, so as to perceive the surrounding environment of the target area in which tracking is required, complete the construction of the three-dimensional environment point cloud map, and acquire the 3D laser point cloud data produced during the scan. During the traversal scan, inter-frame matching is performed on each scanned frame of laser point cloud data, and the relative spatial relationships between adjacent frames of point cloud data are obtained in sequence. For the laser point cloud data frames scanned by the 3D laser radar, two frames of laser point clouds are registered with the ICP method to obtain their relative spatial relationship, i.e. an estimate of the robot's motion state between the two moments.
Specifically, assume two consecutive frames of laser point cloud data $P = \{p_1, \dots, p_n\}$ and $Q = \{q_1, \dots, q_n\}$, where $p_i$ and $q_i$ are the laser point pairs with minimum corresponding distance between the two frames; the Euclidean transformation $T$ is sought such that the two frames of laser point cloud data satisfy the relation:

$\forall i,\; p_i = R\,q_i + t$ (1)

where $R$ is a rotation matrix and $t$ is a translation vector, and the transformation matrix $T$, the rotation matrix $R$ and the translation vector $t$ satisfy:

$T = \begin{bmatrix} R & t \\ 0^{T} & 1 \end{bmatrix}$ (2)

Iterative computation is performed with the ICP registration method until the error sum of squares calculated according to the following formula reaches a minimum, and the transformation matrix $T$ is obtained by solving:

$E = \sum_{i=1}^{n} \lVert p_i - (R\,q_i + t) \rVert^2$ (3)
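As a concrete illustration of relations (1) to (3), a minimal point-to-point ICP can be sketched in Python as follows; the KD-tree nearest-neighbour search, the SVD-based closed-form step and the convergence tolerance are standard choices assumed for the sketch, not specifics of the embodiment.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(P, Q, iters=50, tol=1e-6):
    """Minimal point-to-point ICP.

    P, Q: (N,3) and (M,3) arrays of 3D laser points.
    Returns a 4x4 matrix T with p ~ R q + t, i.e. T maps the frame of Q
    into the frame of P (cf. relations (1) and (2))."""
    T = np.eye(4)
    Q_cur = Q.copy()
    tree = cKDTree(P)
    prev_err = np.inf
    for _ in range(iters):
        # Pair every point of Q with its nearest neighbour in P.
        dist, idx = tree.query(Q_cur)
        P_m = P[idx]
        # Closed-form least-squares R, t via SVD (Kabsch/Umeyama).
        mu_p, mu_q = P_m.mean(0), Q_cur.mean(0)
        H = (Q_cur - mu_q).T @ (P_m - mu_p)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:           # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_p - R @ mu_q
        Q_cur = Q_cur @ R.T + t
        step = np.eye(4); step[:3, :3] = R; step[:3, 3] = t
        T = step @ T
        err = np.mean(dist ** 2)           # squared-error criterion, cf. (3)
        if abs(prev_err - err) < tol:      # stop once the error no longer drops
            break
        prev_err = err
    return T
```

Each iteration re-pairs closest points and applies the closed-form rigid alignment, so the error of relation (3) decreases monotonically until the transformation matrix $T$ is obtained.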
and selecting a key frame according to the relative spatial relation between every two adjacent frames of point cloud data, and storing the point clouds of each selected key frame and the pose corresponding to the key frame. And (3) for laser point cloud data obtained by scanning the 3D laser radar, obtaining a relative spatial relationship between two frames of laser point clouds, namely a relative motion relationship after the two frames of laser point clouds are registered, and further selecting a specific laser frame key frame according to the relative spatial relationship between every two adjacent frames of laser point cloud data.
Specifically, if the accumulated translation distance or rotation angle between the current frame and the previous key frame is greater than a preset spatial threshold, the current frame is selected as a key frame. It can be understood that spatial relationship parameters other than translation distance and rotation angle may also be adopted to select the key frames according to actual requirements.
For the selected key frames, the laser point cloud data and pose of each key frame are retained, while ordinary non-key laser frames only participate in motion estimation and their laser point clouds are not retained; the key frames and their poses are thus saved for subsequent pose tracking. What is obtained are intermittent, discrete coordinate points along the robot's travelled track together with the surrounding-environment laser point cloud scanned by the laser radar at each coordinate point. By retaining these discrete coordinates and the key frame observed at each of them, when the robot later moves near the previously traversed track, its pose tracking can be completed using the correspondence between the laser point cloud observed by the laser radar at the current moment and the retained coordinates and key frames, which greatly reduces the amount of computation in the tracking process, reduces the amount of stored data, and improves tracking efficiency.
S02, when the mobile robot needs to be tracked, a key frame is selected in real time from the laser point cloud data scanned by the laser radar to serve as the current key frame and its pose is initialized; after initialization, key frames close to the pose of the current key frame are selected from the stored key frames and spliced to form a local subgraph, and the real-time pose of the robot is determined relative to the pose of the local subgraph using the current key frame, realizing pose tracking of the robot.
And S201, selecting one frame of laser point cloud data as a current key frame at intervals of a designated time.
Continuously outputting the robot's pose information would waste the computer's computing power unnecessarily. In this embodiment, when the robot's pose needs to be tracked, one frame of laser point cloud is selected as the current laser frame at fixed time intervals, the pose of that laser frame relative to the robot environment is calculated, and this pose is output as the tracked robot pose, which guarantees tracking accuracy while further reducing the required amount of computation.
And S202, initializing the pose of the current frame to obtain the pose of the initialized current key frame.
After the current key frame is selected, its pose needs to be initialized. Specifically, if the laser point cloud data of the current key frame is the first frame of data, the pose corresponding to the current key frame is initialized to the starting-point pose of the robot's historical track; if the laser point cloud data of the current key frame is not the first frame, the pose corresponding to the current key frame is initialized to the pose of the key frame spatially nearest to the coordinate $X_0$ in the robot's historical track. Pose initialization thus has two working states: in the first state, the current key frame is the first frame of data and its pose is initialized to the starting-point pose of the robot's historical track; in the second state, the current key frame is not the first frame and its pose is initialized to the pose of one of the saved key frames of the robot's historical track.
In a specific application embodiment, the coordinate $X_0$ is calculated according to the following formula:

$X_0 = T_{sub}\,\Delta T_{last}$ (4)

where $\Delta T_{last}$ is the relative spatial relationship between the laser point cloud key frame at the previous moment $F_{last}$ and its corresponding local subgraph $M_{last}$, and $T_{sub}$ is the spatial coordinates of the local subgraph $M_{last}$.

Using the relative spatial relationship between the previous laser point cloud key frame $F_{last}$ and its local subgraph $M_{last}$ together with the spatial coordinates of that subgraph, the coordinate $X_0$ is calculated; the database of saved key frames is then searched for the key frame nearest to $X_0$, and the pose of the found key frame is assigned to the current laser frame, completing the pose initialization of the current laser point cloud key frame.
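A minimal sketch of this initialization logic, assuming poses are stored as 4x4 transformation matrices and that formula (4) supplies the search coordinate, might look as follows; the function name and argument layout are illustrative only:

```python
import numpy as np

def initialize_pose(saved_poses, is_first, T_prev_rel=None, T_prev_sub=None):
    """Initialize the current key frame pose (cf. formula (4)).

    saved_poses: stored key frame poses as 4x4 matrices, in track order.
    is_first:    True for the first frame of data.
    T_prev_rel:  relative pose of the previous key frame w.r.t. its subgraph.
    T_prev_sub:  spatial coordinates of that subgraph."""
    if is_first:
        # First working state: start-point pose of the historical track.
        return saved_poses[0]
    # Formula (4): search coordinate X0 = T_sub * Delta T_last;
    # the position used for the nearest search is its translation part.
    x0 = (T_prev_sub @ T_prev_rel)[:3, 3]
    # Second working state: nearest stored key frame supplies the pose.
    d = [np.linalg.norm(T[:3, 3] - x0) for T in saved_poses]
    return saved_poses[int(np.argmin(d))]
```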
And S203, carrying out inter-frame matching on the current continuous laser data frames to obtain a current space transformation relation.
Specifically, motion state estimation between consecutive laser frames is completed by the ICP registration method until the next frame is selected according to the time interval, and the spatial transformation relation between the pose-initialized current frame and the next selected laser frame is obtained through inter-frame matching.
Step S204, searching the stored key frames, according to the current spatial transformation relation, for key frames close to the pose of the initialized current key frame (with spatial distance within a preset threshold), and splicing the found key frames to form a local subgraph (local map).
The saved key frames are spliced into a subgraph using their saved poses. Assume that, after pose initialization by the pose initialization module, the pose of the current laser frame equals the pose $T_i$ of a saved key frame $F_i$; a local subgraph $M$ is then constructed by splicing the several key frames whose spatial distance to $F_i$ satisfies a threshold value.
Specifically, the pose corresponding to the local subgraph $M$ is set to the pose $T_M$ of the key frame $F_i$, and each key frame $F_k$ is spliced to form the local subgraph $M$ according to:

$p_j^{M} = T_M^{-1}\,T_k\,p_j$ (5)

where $p_j^{M}$ is the coordinate expression, in the local subgraph $M$, of the laser point $p_j$ of the $k$-th laser data frame $F_k$, $T_M$ is the coordinates of the local subgraph $M$, and $T_k$ is the saved coordinates of the laser data frame $F_k$.
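By way of example, formula (5) could be applied to splice stored key frames into a subgraph as in the following sketch; the data layout (one (N,3) point array per key frame) is an assumption of the example:

```python
import numpy as np

def splice_subgraph(keyframes, poses, T_sub):
    """Splice stored key frames into one local subgraph (cf. formula (5)).

    keyframes: list of (N,3) point arrays, each in its key frame's own frame.
    poses:     list of 4x4 saved key frame poses T_k.
    T_sub:     4x4 pose chosen as the subgraph's coordinate frame.
    Returns an (N,3) array of all points expressed in the subgraph frame."""
    T_sub_inv = np.linalg.inv(T_sub)
    parts = []
    for pts, T_k in zip(keyframes, poses):
        # key frame -> map frame -> subgraph frame, as in formula (5)
        hom = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coords
        parts.append((hom @ (T_sub_inv @ T_k).T)[:, :3])
    return np.vstack(parts)
```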
And S205, matching the current key frame with the local subgraph to obtain the pose of the current key frame relative to the local subgraph.
Specifically, if the pose of the initialized current frame $F_c$ obtained after pose initialization is the pose of a key frame $F_i$ in the stored robot historical track, and the local subgraph (local map) spliced from the key frames near $F_i$ is $M$, this embodiment registers the current frame $F_c$ against the local subgraph $M$ by the ICP registration method to obtain the spatial coordinate relation $\Delta T$ of the current frame $F_c$ relative to the local subgraph $M$, and outputs this spatial coordinate relation $\Delta T$ for pose conversion.
S206, applying the pose of the current frame relative to the local subgraph to the space coordinates of the local subgraph to obtain the pose of the current robot, and completing the pose tracking of the robot.
Specifically, the spatial coordinate relation $\Delta T$ of the current frame $F_c$ relative to the local subgraph $M$ is applied to the spatial coordinates $T_M$ of the local subgraph $M$ to obtain the current robot pose, completing the pose tracking of the robot.
In a specific application embodiment, the spatial coordinates $T_c$ of the current key frame $F_c$ are calculated according to the following formula:

$T_c = T_M\,\Delta T$ (6)

where $T_c$ is the spatial coordinates of the current key frame $F_c$, i.e. the spatial coordinates of the tracked robot pose, $\Delta T$ is the relative spatial relationship between the current key frame $F_c$ and the local subgraph $M$, and $T_M$ is the spatial coordinates of the local subgraph $M$.
The obtained spatial coordinates $T_c$ of the current key frame $F_c$ are output along two paths: one path is used for the pose initialization of the next frame, and the other publishes the spatial coordinates $T_c$ of the current frame in real time as the real-time pose of the tracked robot.
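Putting formula (6) together with the preceding sketches, one pass of steps S202 to S206 might be arranged as follows; the 10 m neighbourhood radius, the choice of the first nearby key frame pose as the subgraph pose, and the reuse of the icp, initialize_pose and splice_subgraph sketches above are all assumptions of the example:

```python
import numpy as np

def track_step(scan, saved_clouds, saved_poses, T_prev_rel, T_prev_sub, radius=10.0):
    """One illustrative pass of steps S202-S206 (reuses the sketches above).

    scan: (N,3) laser points of the current key frame; saved_clouds and
    saved_poses hold the stored key frames; T_prev_rel / T_prev_sub come
    from the previous cycle and feed formula (4)."""
    # S202: initialize from the nearest stored key frame (formula (4)).
    T_init = initialize_pose(saved_poses, False, T_prev_rel, T_prev_sub)
    # S204: splice the stored key frames within `radius` of the initial position.
    near = [k for k, T in enumerate(saved_poses)
            if np.linalg.norm(T[:3, 3] - T_init[:3, 3]) < radius]
    T_sub = saved_poses[near[0]]     # subgraph pose = one nearby key frame pose
    subgraph = splice_subgraph([saved_clouds[k] for k in near],
                               [saved_poses[k] for k in near], T_sub)
    # S205: register current frame against the subgraph (the Delta T of (6)).
    T_rel = icp(subgraph, scan)
    # S206: world pose of the current frame, T_c = T_M * Delta T (formula (6)).
    T_cur = T_sub @ T_rel
    # One copy initializes the next frame, one copy is published in real time.
    return T_cur, T_rel, T_sub
```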
S207, cyclically executing steps S201 to S206 to complete the pose tracking of the robot.
In this embodiment, the robot's surrounding environment is perceived with a 3D laser radar, a number of key frames are processed and stored from the 3D laser point cloud data during the traversal scan, and map modelling of the three-dimensional environment where the robot is located is completed. During real-time tracking, the stored key frames are used to determine the robot's pose relative to the environment point cloud map, and the robot is positioned in real time by tracking its pose, providing accurate positioning information for autonomous navigation. Environment modelling and robot pose tracking can both be completed with a single 3D laser radar, without fusing additional robot sensors, which greatly reduces the amount of computation and realizes lightweight pose tracking, so the method can conveniently be deployed on mobile robot platforms with different computing power and meet the positioning needs of various mobile robots in different working scenarios.
As shown in fig. 2, the mobile robot pose tracking device based on the 3D lidar of the present embodiment includes:
the map construction module is used for carrying out traversal scanning on the environment by a mobile robot carrying the 3D laser radar, carrying out inter-frame matching on laser point cloud data of each frame obtained by scanning in the traversal process, sequentially obtaining the relative spatial relationship between two adjacent frames of point cloud data, selecting key frames according to the relative spatial relationship, and storing each selected key frame and the pose corresponding to the key frame;
and the pose tracking module is used, when the mobile robot needs to be tracked, for selecting a key frame in real time from the laser point cloud data scanned by the laser radar as the current key frame and initializing its pose, for selecting, after initialization, key frames close to the pose of the current key frame from the stored key frames and splicing them to form a local subgraph, and for determining the real-time pose of the robot relative to the pose of the local subgraph using the current key frame, so as to realize pose tracking of the robot.
In this embodiment, the pose tracking module specifically includes:
the current frame selecting unit is used for selecting one frame of laser point cloud data as a current key frame at intervals of a specified time, and calculating a corresponding pose to be used as a tracked robot pose;
the pose initializing unit is used for initializing the pose of the current key frame to obtain the initialized pose of the current key frame;
the inter-frame matching unit is used for performing inter-frame matching on the current continuous laser data frames to obtain a current space transformation relation;
the local map generating unit is used for searching a key frame close to the initialized current key frame pose from the stored key frames according to the current space transformation relation, and splicing the searched key frames to form a local subgraph;
the frame and sub-image matching unit is used for matching the current key frame with the local sub-image to obtain the pose of the current frame relative to the local sub-image;
and the pose conversion unit is used for applying the pose of the current frame relative to the local subgraph to the spatial coordinates of the local subgraph to obtain the current robot pose, and for transmitting the current robot pose both to the pose initialization unit, where it initializes the pose of the next frame, and to the pose output unit, which outputs the real-time pose of the robot tracked at the current moment.
In a specific application embodiment, as shown in figs. 2 and 3, the mobile robot carrying the 3D laser radar traverses the environment for one full loop. During this period, the first inter-frame matching unit registers the consecutive laser point clouds with the ICP method to obtain the relative spatial relationships between adjacent laser frames; the first key frame selection unit selects some of the laser frames as key frames according to these relative spatial relationships, stores the laser point cloud of each key frame, and outputs the key frames to the odometer module, which stores the spatial coordinates of each key frame in transformation matrix format.
After the robot has traversed the environment for one full loop, the first key frame selection unit and the odometer module have respectively stored the point cloud and spatial coordinates of each key frame on the local hard disk of the computer. When the robot's pose needs to be tracked in real time, the robot is started and driven within the previously traversed scene, and it reads the stored key frame point clouds of that scene and the corresponding key frame spatial coordinates from the local hard disk. The second key frame selection unit in the pose tracking module selects one of the consecutive laser frames as the current laser frame according to a time threshold; the second inter-frame matching unit obtains the relative spatial relationship between the current consecutive laser frames by inter-frame matching; the local map generation unit reads, from the local hard disk, the laser frames adjacent to the spatial coordinates of the current laser frame and splices several adjacent frames into a local subgraph; the frame-and-subgraph matching unit registers the current key frame against the local subgraph to obtain the spatial coordinates of the current key frame relative to the local map; and finally the pose conversion unit converts the spatial relation of the current key frame relative to the local map into the spatial coordinates of the current frame, which are transmitted to the pose initialization unit for initializing the initial pose of the next laser frame and output to the pose output module as the tracked real-time pose of the robot.
In a specific application embodiment, the detailed working steps of the mobile robot pose tracking device are as follows:
step 1: the input module senses the surrounding environment of the robot through the 3D laser radar carried by the robot and transmits the scanned laser point cloud frame to the registration module.
Step 2: during robot scanning, the registration module registers two frames of laser point clouds by the ICP method to obtain the relative spatial relationship between them, i.e. an estimate of the robot's motion state between the two moments. For two consecutive frames of laser point clouds $P = \{p_1, \dots, p_n\}$ and $Q = \{q_1, \dots, q_n\}$, where $p_i$ and $q_i$ are the laser point pairs with minimum corresponding distance in the two frames, the Euclidean transformation $T$ is sought that makes the two frames of laser point clouds overlap as much as possible, i.e. that satisfies the relations $p_i = R\,q_i + t$ and $T = \begin{bmatrix} R & t \\ 0^{T} & 1 \end{bmatrix}$. The ICP method then iterates until the error sum of squares reaches a minimum and obtains the transformation matrix $T$ by solving, giving the relative motion relationship between the two frames of laser point clouds.
Step 3: during robot scanning, the key frame selection module selects specific laser frames as key frames according to the spatial transformation relation: if the accumulated translation distance or rotation angle between the current frame and the previous key frame is greater than the set spatial threshold, the current frame is selected as a key frame. The key frame selection module retains the laser point cloud of each key frame, while ordinary laser frames only participate in motion estimation and their laser points are not retained; the poses of the key frames are transmitted to the odometer module. The odometer module receives the poses of the individual key frames transmitted by the key frame module and stores them in a local text document on the computer in transformation matrix form.
Step 4: the pose tracking module is started, and the current frame selection unit selects a specific laser frame as the current laser frame according to the time relation; that is, one frame of laser point cloud is selected as the current laser frame at fixed time intervals, the pose of that laser frame relative to the robot environment is calculated, and this pose is output as the tracked robot pose.
Step 5: the pose initialization unit initializes the current frame in one of two working states. If the current laser frame is the first laser frame, its pose is initialized to the starting-point pose of the robot's historical track; if it is not the first laser frame, the pose initialization module initializes its pose to the pose of a key frame of the robot's historical track. Specifically, the pose initialization module searches the local text document on the computer for the key frame of the robot's historical track spatially nearest to the coordinate $X_0$, and assigns its pose to the current laser frame to complete the pose initialization of the current laser frame.
Step 6: the inter-frame matching unit completes motion state estimation between consecutive laser frames by the ICP method until the current frame selection unit selects the next laser frame according to the time interval, and then calculates the spatial transformation relation between the pose-initialized current laser frame and the next laser frame selected by the current frame selection unit.
Step 7: the local map generation unit splices some of the key frames saved by the key frame selection module into a subgraph, using the key frame poses saved in the local text document on the computer. Assume that the pose of the current laser frame after pose initialization by the pose initialization module is the same as the pose $T_i$ of a key frame $F_i$; the local map generation unit then constructs a local map $M$ by splicing the several key frames whose spatial distance to $F_i$ satisfies a threshold. For splicing the local map $M$, the laser points of the other key frames $F_k$ satisfy the correspondence $p_j^{M} = T_M^{-1}\,T_k\,p_j$, where $p_j^{M}$ is the coordinate expression, in the local map $M$, of the laser point $p_j$ of laser frame $F_k$, $T_M$ is the coordinates of the local map $M$, and $T_k$ is the coordinates of the laser frame $F_k$ saved by the odometer module.
Step 8: the frame-and-subgraph matching unit registers the current frame against the local map by the ICP method to obtain the pose of the current frame relative to the local map. If, after pose initialization, the current frame $F_c$ lies near a key frame $F_i$ of the robot's historical track, and the local map spliced by the local map module from the key frames near $F_i$ is $M$, the subgraph matching module registers the current frame $F_c$ against the local map $M$ by the ICP method to obtain the spatial coordinate relation $\Delta T$ of the current frame $F_c$ relative to the local map $M$, and transmits this spatial coordinate relation $\Delta T$ to the pose conversion unit.
Step 9: the pose conversion unit applies the spatial coordinate relation $\Delta T$ of the current frame $F_c$ relative to the local map, obtained by the frame-and-subgraph matching unit, to the spatial coordinates $T_M$ of the local map $M$ to obtain the current robot pose and complete the pose tracking of the robot. The spatial coordinates $T_c$ of the current frame $F_c$ obtained by the pose conversion unit are transmitted respectively to the pose initialization module, for robot pose initialization, and to the pose output module. The pose output module receives the spatial coordinates $T_c$ of the current frame $F_c$ and publishes these coordinates in real time as the real-time pose of the tracked robot, completing real-time tracking of the robot's pose.
In this embodiment, real-time tracking of the robot pose can be completed with only a single 3D laser radar, providing real-time, accurate positioning and environment map information for a mobile robot that requires autonomous navigation. Few sensors are required, the cost is low, and the computation is simple and small in volume, so lightweight tracking computation can be realized and the demands on the robot's on-board computer are low. The method can conveniently be deployed on different mobile robot platforms and can realize real-time robot pose tracking even on platforms with low computing power, while remaining easy to upgrade and optimize, making it suitable for real-time positioning in autonomous navigation of mobile robots across a variety of scenarios.
This embodiment also provides a mobile robot pose tracking device based on a 3D laser radar, comprising a 3D laser radar, a processor and a memory; the 3D laser radar is mounted on the mobile robot and is used for scanning the surrounding environment of the mobile robot to obtain laser point cloud data, the memory is used for storing a computer program, and the processor is used for executing the computer program so as to perform the method described above.
It will be understood that the method in this embodiment may be performed by a single device, for example, a computer or a server, or may be implemented by a plurality of devices in a distributed scenario, where one device of the plurality of devices may perform only one or more steps in the method in this embodiment, and the plurality of devices interact to implement the method. The processor may be implemented as a general-purpose CPU, a microprocessor, an application-specific integrated circuit, or one or more integrated circuits, etc. for executing the relevant program to implement the methods described in this embodiment. The memory may be implemented in the form of read-only memory ROM, random access memory RAM, static storage devices, dynamic storage devices, etc. The memory may store an operating system and other application programs, and when the methods of the present embodiments are implemented in software or firmware, the associated program code is stored in the memory and invoked for execution by the processor.
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit the present invention in any way. While the invention has been described with reference to preferred embodiments, these are not intended to be limiting. Any simple modification, equivalent variation or adaptation of the above embodiments made according to the technical substance of the present invention shall therefore fall within the scope of the technical solution of the present invention.

Claims (10)

1. The mobile robot pose tracking method based on the 3D laser radar is characterized by comprising the following steps:
s01, map construction: performing traversing scanning on the environment by a mobile robot carrying a 3D laser radar, performing inter-frame matching on laser point cloud data of each frame obtained by scanning in the traversing process, sequentially obtaining relative spatial relations between two adjacent frames of point cloud data, selecting key frames according to the relative spatial relations, and storing each selected key frame and a pose corresponding to the key frame;
s02, tracking the pose: when the mobile robot is required to be tracked, in the running process of the robot, a key frame is selected from laser point cloud data obtained by laser radar scanning in real time to serve as a current key frame, pose initialization is carried out, key frames close to the pose of the current key frame are selected from the stored key frames after initialization, stitching is carried out to form a local subgraph, and the pose of the real-time robot is determined relative to the pose of the local subgraph by utilizing the current key frame, so that pose tracking of the robot is realized.
2. The method for tracking the pose of a mobile robot based on a 3D lidar according to claim 1, wherein when a key frame is selected according to the relative spatial relationship in step S01, if the cumulative value of the translational distance or the rotational angle between the current frame and the previous key frame is greater than a preset spatial threshold, the current frame is selected as the key frame.
3. The mobile robot pose tracking method based on 3D lidar of claim 1, wherein step S02 comprises:
s201, selecting one frame of laser point cloud data as a current key frame at intervals of a designated time;
s202, initializing the pose of the current key frame to obtain the pose of the initialized current key frame;
s203, carrying out inter-frame matching on the current continuous laser data frames to obtain a current space transformation relation;
S204, searching a key frame close to the pose of the initialized current key frame from the stored key frames according to the current spatial transformation relation, and splicing the found key frames to form a local subgraph;
s205, matching the current key frame with the local subgraph to obtain the pose of the current frame relative to the local subgraph;
s206, applying the pose of the current frame relative to the local subgraph to the space coordinates of the local subgraph to obtain the pose of the current robot;
s207, circularly executing the steps S201 to S206, and completing the pose tracking of the robot.
4. The method for tracking the pose of a mobile robot based on 3D lidar of claim 3, wherein in step S202, if the current key frame's laser point cloud data is the first frame of data, the pose corresponding to the current key frame is initialized to the starting-point pose of the robot's historical track, and if the current key frame's laser point cloud data is not the first frame, the pose corresponding to the current key frame is initialized to the pose of the key frame spatially nearest to the coordinate $X_0$ in the robot's historical track, where the coordinate $X_0$ is calculated according to $X_0 = T_{sub}\,\Delta T_{last}$, in which $\Delta T_{last}$ is the relative spatial relationship between the key frame at the previous moment $F_{last}$ and its corresponding local subgraph $M_{last}$, and $T_{sub}$ is the spatial coordinates of the local subgraph $M_{last}$.
5. The method for tracking the pose of a mobile robot based on 3D lidar according to claim 3, wherein in step S204, each found key frame $F_k$ is spliced according to the following formula to form the local subgraph $M$:

$p_j^{M} = T_M^{-1}\,T_k\,p_j$

where $p_j^{M}$ is the coordinate, in the local subgraph $M$, of the laser point $p_j$ of the $k$-th key frame $F_k$, $T_M$ is the coordinates of the local subgraph $M$, and $T_k$ is the saved coordinates of the key frame $F_k$.
6. The method according to claim 3, wherein in step S205, if the pose of the initialized current key frame $F_c$ obtained after pose initialization is the pose of a key frame $F_i$, and the local subgraph spliced from the key frames near $F_i$ is $M$, the current frame $F_c$ and the local subgraph $M$ are registered by the ICP registration method to obtain the spatial coordinate relation $\Delta T$ of the current key frame $F_c$ relative to the local subgraph $M$; in step S206, the spatial coordinate relation $\Delta T$ of the current key frame $F_c$ relative to the local subgraph $M$ is applied to the spatial coordinates of the local subgraph $M$ to obtain the spatial coordinates of the current key frame $F_c$, and the spatial coordinates $T_c$ of the current key frame $F_c$ are calculated according to the following formula:

$T_c = T_M\,\Delta T$

where $T_c$ is the spatial coordinates of the current key frame $F_c$, i.e. the spatial coordinates of the tracked robot pose, $\Delta T$ is the relative spatial relationship between the current key frame $F_c$ and the local subgraph $M$, and $T_M$ is the spatial coordinates of the local subgraph $M$.
7. The mobile robot pose tracking method based on 3D lidar according to any one of claims 1 to 6, wherein in step S02, an ICP registration method is adopted to perform inter-frame matching on the scanned laser point cloud frame data to obtain the spatial relationship between two adjacent frames of point cloud data, including:

for two consecutive frames of laser point cloud data $P = \{p_1, \dots, p_n\}$ and $Q = \{q_1, \dots, q_n\}$, where $p_i$ and $q_i$ are the laser point pairs with minimum corresponding distance between the two frames, searching for the Euclidean transformation $T$ such that the two frames of laser point cloud data satisfy the relation $\forall i,\; p_i = R\,q_i + t$,

where $R$ is a rotation matrix and $t$ is a translation vector, and the transformation matrix $T$, the rotation matrix $R$ and the translation vector $t$ satisfy $T = \begin{bmatrix} R & t \\ 0^{T} & 1 \end{bmatrix}$;

and performing iterative computation with the ICP registration method until the error sum of squares $E = \sum_{i=1}^{n} \lVert p_i - (R\,q_i + t) \rVert^2$ reaches a minimum, and solving to obtain the transformation matrix $T$.
8. Mobile robot pose tracking device based on 3D laser radar, characterized by comprising:
the map construction module is used for carrying out traversal scanning on the environment by a mobile robot carrying the 3D laser radar, carrying out inter-frame matching on each frame of laser point cloud data obtained by scanning in the traversal process, sequentially obtaining the relative spatial relationship between two adjacent frames of point cloud data, selecting key frames according to the relative spatial relationship, and storing each selected key frame and the pose corresponding to the key frame;
and the pose tracking module is used, when the mobile robot needs to be tracked, for selecting a key frame in real time from the laser point cloud data scanned by the laser radar as the current key frame and initializing its pose, for selecting, after initialization, key frames close to the pose of the current key frame from the stored key frames and splicing them to form a local subgraph, and for determining the real-time pose of the robot relative to the pose of the local subgraph using the current key frame, so as to realize pose tracking of the robot.
9. The mobile robot pose tracking device based on 3D lidar of claim 8, wherein the pose tracking module comprises:
the current frame selecting unit is used for selecting one frame of laser point cloud data as a current key frame at intervals of a specified time, and calculating a corresponding pose to be used as a tracked robot pose;
the pose initializing unit is used for initializing the pose of the current key frame to obtain the initialized pose of the current key frame;
the inter-frame matching unit is used for performing inter-frame matching on the current continuous laser data frames to obtain a current space transformation relation;
the local map generating unit is used for searching a key frame close to the initialized current key frame pose from the stored key frames according to the current space transformation relation, and splicing the searched key frames to form a local subgraph;
the frame and sub-image matching unit is used for matching the current key frame with the local sub-image to obtain the pose of the current frame relative to the local sub-image;
and the pose conversion unit is used for applying the pose of the current frame relative to the local subgraph to the spatial coordinates of the local subgraph to obtain the current robot pose, and for transmitting the current robot pose both to the pose initialization unit, where it initializes the pose of the next frame, and to the pose output unit, which outputs the real-time pose of the robot tracked at the current moment.
10. A mobile robot pose tracking device based on a 3D laser radar, comprising a 3D laser radar mounted on a mobile robot and used for scanning the surrounding environment of the mobile robot to obtain laser point cloud data, and further comprising a processor and a memory, wherein the memory is used for storing a computer program, and the processor is used for executing the computer program to perform the method according to any one of claims 1 to 7.
CN202410001849.4A 2024-01-02 2024-01-02 Mobile robot pose tracking method and device based on 3D laser radar Active CN117495968B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410001849.4A CN117495968B (en) 2024-01-02 2024-01-02 Mobile robot pose tracking method and device based on 3D laser radar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410001849.4A CN117495968B (en) 2024-01-02 2024-01-02 Mobile robot pose tracking method and device based on 3D laser radar

Publications (2)

Publication Number Publication Date
CN117495968A true CN117495968A (en) 2024-02-02
CN117495968B CN117495968B (en) 2024-05-17

Family

ID=89671268

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410001849.4A Active CN117495968B (en) 2024-01-02 2024-01-02 Mobile robot pose tracking method and device based on 3D laser radar

Country Status (1)

Country Link
CN (1) CN117495968B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112965063A (en) * 2021-02-11 2021-06-15 深圳市安泽智能机器人有限公司 Robot mapping and positioning method
CN113066105A (en) * 2021-04-02 2021-07-02 北京理工大学 Positioning and mapping method and system based on fusion of laser radar and inertial measurement unit
CN113674399A (en) * 2021-08-16 2021-11-19 杭州图灵视频科技有限公司 Mobile robot indoor three-dimensional point cloud map construction method and system
CN114236552A (en) * 2021-11-12 2022-03-25 苏州玖物互通智能科技有限公司 Repositioning method and system based on laser radar
CN115265523A (en) * 2022-09-27 2022-11-01 泉州装备制造研究所 Robot simultaneous positioning and mapping method, device and readable medium
CN116105721A (en) * 2023-04-11 2023-05-12 深圳市其域创新科技有限公司 Loop optimization method, device and equipment for map construction and storage medium
CN116299500A (en) * 2022-12-14 2023-06-23 江苏集萃清联智控科技有限公司 Laser SLAM positioning method and device integrating target detection and tracking
CN116295412A (en) * 2023-03-01 2023-06-23 南京航空航天大学 Depth camera-based indoor mobile robot dense map building and autonomous navigation integrated method
CN116934851A (en) * 2022-04-06 2023-10-24 广州视源电子科技股份有限公司 Robot positioning method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN117495968B (en) 2024-05-17

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant