CN113358112B - Map construction method and laser inertia odometer - Google Patents

Map construction method and laser inertia odometer

Info

Publication number
CN113358112B
CN113358112B
Authority
CN
China
Prior art keywords
information
map
laser
estimated
dynamic object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110618535.5A
Other languages
Chinese (zh)
Other versions
CN113358112A (en)
Inventor
赖豪文
姜晓旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Chaoxing Future Technology Co ltd
Original Assignee
Beijing Chaoxing Future Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Chaoxing Future Technology Co ltd filed Critical Beijing Chaoxing Future Technology Co ltd
Priority to CN202110618535.5A priority Critical patent/CN113358112B/en
Publication of CN113358112A publication Critical patent/CN113358112A/en
Application granted granted Critical
Publication of CN113358112B publication Critical patent/CN113358112B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G01C21/32 Structuring or formatting of map data
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G01C21/1652 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C22/00 Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 Systems determining position data of a target
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/50 Systems of measurement based on relative movement of target
    • G01S17/58 Velocity or trajectory determination systems; Sense-of-movement determination systems
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)

Abstract

The invention belongs to the field of automatic driving, and particularly relates to a map construction method and a laser inertia odometer. The map construction method comprises a sensor information acquisition step, a dynamic object removal step, an odometer data acquisition step, an information comparison step, a loop detection step, a nonlinear optimization step and a map drawing step. The laser inertia odometer comprises a dot matrix scanning device and an inertia measuring device which exchange data; the inertia measuring device and the scanning probe of the dot matrix scanning device are spatially fixed relative to each other. The map construction method and the laser inertia odometer have strong adaptability and robustness to the environment, and the constructed map has high precision.

Description

Map construction method and laser inertia odometer
Technical Field
The invention belongs to the field of automatic driving, and particularly relates to a map construction method and a laser inertia odometer.
Background
Automatic driving has been a hot topic in recent years, and in fields such as relieving traffic congestion, improving road safety and reducing air pollution, it promises disruptive change. Since automatic driving often faces uncontrollable environments, one of its key difficulties is the unpredictability of the environment; acquiring and mastering environmental information accurately, quickly and effectively, and positioning accurately within it, is a prerequisite and an important guarantee for the normal operation of the other modules.
In outdoor scenes, absolute positioning can be performed by means of a Global Navigation Satellite System (GNSS), but GNSS signals are often blocked by high-rise buildings, bridges, tunnels and the like, so that positioning accuracy becomes unstable and position information may even be lost entirely.
In order to ensure positioning stability, one feasible approach is to build a map of the environment in advance and then determine the position and attitude within this existing map in real time from sensor data. This approach effectively avoids the problem of satellite signal occlusion, works stably in various environments, and is increasingly applied in automatic driving. Here the accuracy and quality of the pre-built environment map are critical, as they directly affect the accuracy of subsequent positioning.
In order to construct a point cloud map suitable for localization, a Simultaneous Localization and Mapping (SLAM) method is often adopted. SLAM tightly couples positioning and mapping, and a classical SLAM framework can be divided into sensor information acquisition, a front end (odometer), a back end (nonlinear optimization), loop detection, mapping and the like.
For the mapping front end, a visual odometer is affected outdoors by illumination changes, weather, motion blur and the like, so its accuracy and robustness are limited. A laser odometer resists environmental interference to a certain extent; however, in the prior art, adjacent laser frames are often matched directly, or an IMU is introduced only to provide an initial value for matching, forming a loosely coupled laser inertia odometer with relatively low precision. In addition, the IMU tends to have a certain zero offset (bias), and using it directly without estimating and calibrating this bias also introduces errors.
For the mapping back end, filter-based optimization methods suffer from drawbacks such as rapidly growing memory use and poor global optimization performance. Methods based on deterministic graph optimization, such as bundle adjustment (BA) and pose graphs, optimize the whole graph every time a new node is added; when the numbers of nodes and edges are large, this consumes a great deal of time, with much repeated computation and low efficiency.
Existing complete mapping schemes are often designed for a fixed sensor configuration, are difficult to extend, and do not fuse multiple sensors to improve mapping accuracy. In addition, these schemes do not consider dynamic obstacles in the environment, such as vehicles and pedestrians; applying them directly to mapping not only introduces front-end odometer matching errors but also causes smearing in the map, degrading map quality.
Disclosure of Invention
In view of the above, the invention provides a high-precision, high-quality point cloud map construction method that has strong adaptability and robustness to the environment; the constructed map has high precision.
In order to achieve the technical purpose, the invention adopts the following specific technical scheme:
a map building method, comprising:
a sensor information acquisition step, in which a dot matrix scanning device measures the position information, speed information, attitude information and surrounding environment information of the map construction device in which it is located during motion;
a dynamic object removal step, in which dynamic objects in the surrounding environment information are removed by a convolutional-neural-network point cloud segmentation algorithm to obtain estimated environment information;
an odometer data acquisition step, in which the speed and attitude of the map construction device during motion are obtained based on the inertia measuring device;
an information comparison step, in which the information obtained in the sensor information acquisition step is compared with the information obtained in the odometer data acquisition step to generate estimated position information, estimated speed information and estimated attitude information;
a loop detection step, in which, when the map construction device arrives at a loop detection location, the estimated environment information is compared with that obtained on the previous arrival at the same location to obtain deviation correction information;
a nonlinear optimization and map drawing step, in which, based on a factor graph optimization framework, the estimated position information, estimated speed information, estimated attitude information, estimated environment information and deviation correction information are substituted into a world map, and the accumulated deviation between the two arrivals at the loop detection location is nonlinearly optimized according to the deviation correction information to draw the map;
wherein, there are at least three identical scanning points between the adjacent scanning positions of the dot matrix scanning device.
Further, the dot matrix scanning device is a 3D laser radar.
Further, the inertia measuring device is an inertial measurement unit (IMU).
Further, the point cloud segmentation algorithm is cnn_seg.
Further, the nonlinear optimization is based on a GTSAM optimization library.
Further, the specific method for dynamic object removal comprises the following steps:
S101: inputting a laser frame of the laser radar into the point cloud segmentation network, and outputting the label category information of all scanning points of the laser frame;
S102: indexing, according to the label category information of all scanning points, the scanning points belonging to dynamic object categories and removing them from the laser frame to obtain a dynamic-object-removed laser frame.
Further, the specific method for dynamic object removal further comprises,
S103: further indexing and removing, in the dynamic-object-removed laser frame, the scanning points belonging to dynamic object categories.
Further, the position, speed and posture of the map building device obtained by the IMU in the movement process are calculated by the following formulas:
$$v_{t+\Delta t} = v_t + g\,\Delta t + R_t\,\hat{a}_t\,\Delta t$$

$$p_{t+\Delta t} = p_t + v_t\,\Delta t + \tfrac{1}{2}\,g\,\Delta t^2 + \tfrac{1}{2}\,R_t\,\hat{a}_t\,\Delta t^2$$

wherein: v is velocity, p is position, R is the attitude rotation matrix, g is the gravitational acceleration constant, Δt is the integration interval, and $\hat{a}_t$ is the acceleration measured by the IMU.
Further, the method also comprises the following steps:
an IMU position and attitude calibration step, in which the noise of the IMU is removed to calibrate the speed and attitude of the map construction device obtained during motion.
Further, the specific method for information comparison is as follows:
S201: extracting line features and surface features from the laser frame, substituting the position information and attitude information, and fusing them with the world map to generate a local line feature map and/or a local surface feature map;
S202: combining all local line feature maps and/or local surface feature maps according to the position, speed and attitude of the map construction device obtained by the IMU during motion to form a single-frame map;
S203: estimating the IMU zero offset by using the calculateEstimate function in the GTSAM optimization library, wherein the generated estimate is a pre-deviation value, and the pre-deviation value is used as the spatial offset of the single-frame map for the next laser frame of the laser radar.
Meanwhile, the invention also provides a laser inertia odometer, which comprises a dot matrix scanning device and an inertia measuring device that exchange data; the inertia measuring device and the scanning probe of the dot matrix scanning device are spatially fixed relative to each other.
By adopting the technical scheme, the invention can bring the following beneficial effects:
1. the invention realizes a high-precision, high-quality point cloud map construction method with strong adaptability and robustness to the environment, and the constructed map has high precision;
2. the invention adopts a factor graph as the nonlinear optimization back end; the overall structure is designed so that multi-sensor fusion can be realized conveniently, giving strong extensibility. Meanwhile, the advantages of different sensors complement each other, further improving mapping precision and environmental adaptability;
3. the invention has a dynamic object removal module, which effectively avoids the smear phenomenon caused by dynamic objects in the map, so the mapping quality is high. Meanwhile, this module prevents the front end from being interfered with by dynamic objects during matching, improving the accuracy of the front-end odometer.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic overall flow chart of a map construction method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of the nonlinear optimization in the embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. In addition, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to or other than one or more of the aspects set forth herein.
It should be further noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the drawings only show the components related to the present invention rather than being drawn according to the number, shape and size of the components in actual implementation, and the type, quantity and proportion of each component in actual implementation can be changed freely, and the layout of the components can be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
In an embodiment of the present invention, a map construction method is provided, including:
a sensor information acquisition step, in which a dot matrix scanning device measures the position information, speed information, attitude information and surrounding environment information of the map construction device in which it is located during motion;
a dynamic object removal step, in which dynamic objects in the surrounding environment information are removed by a convolutional-neural-network point cloud segmentation algorithm to obtain estimated environment information;
an odometer data acquisition step, in which odometer position information, odometer speed information and odometer attitude information of the map construction device during motion are acquired based on the inertia measuring device;
an information comparison step, in which the information obtained in the sensor information acquisition step is compared with the information obtained in the odometer data acquisition step to generate estimated position information, estimated speed information and estimated attitude information;
a loop detection step, in which, when the map construction device arrives at a loop detection location, the estimated environment information is compared with that obtained on the previous arrival at the same location to obtain deviation correction information;
a nonlinear optimization and map drawing step, in which, based on a factor graph optimization framework, the estimated position information, estimated speed information, estimated attitude information, estimated environment information and deviation correction information are substituted into a world map, and the accumulated deviation between the two arrivals at the loop detection location is nonlinearly optimized according to the deviation correction information to draw the map;
wherein, there are at least three identical scanning points between the adjacent scanning positions of the dot matrix scanning device.
In this embodiment, the dot matrix scanning device is a laser radar, short for laser detection and ranging system, composed of a transmitter, an antenna, a receiver, a tracking frame and an information processing system; its principle is similar to that of a conventional radar. By setting a fixed scanning frequency and recording and comparing the scanning results before and after multiple movements, the position information, speed information, attitude information and surrounding environment information of the map construction device during motion can be obtained.
in this embodiment, the inertial measurement unit is an IMU (inertial sensor), and the IMU of this embodiment should further include a sensor capable of outputting acceleration and angular velocity information, such as an AHRS (Attitude and Heading Reference System) and a GNSS (Global Navigation Satellite System).
In this embodiment, the point cloud segmentation algorithm is a network that takes the point cloud of a laser frame as input and outputs the class label of each point, such as cnn_seg. The nonlinear optimization relies on the GTSAM optimization library.
In this embodiment, the specific method for dynamic object removal includes the following steps:
S101: inputting a laser frame of the laser radar into the point cloud segmentation network, and outputting the label category information of all scanning points of the laser frame;
S102: indexing, according to the label category information of all scanning points, the scanning points belonging to dynamic object categories and removing them from the laser frame to obtain a dynamic-object-removed laser frame.
In this embodiment, the specific method for dynamic object removal further comprises,
S103: further indexing and removing, in the dynamic-object-removed laser frame, the scanning points belonging to dynamic object categories.
In this embodiment, the position, velocity and attitude of the mapping device obtained by the IMU during motion are calculated by the following formulas:
$$v_{t+\Delta t} = v_t + g\,\Delta t + R_t\,\hat{a}_t\,\Delta t$$

$$p_{t+\Delta t} = p_t + v_t\,\Delta t + \tfrac{1}{2}\,g\,\Delta t^2 + \tfrac{1}{2}\,R_t\,\hat{a}_t\,\Delta t^2$$

wherein: v is velocity, p is position, R is the attitude rotation matrix, g is the gravitational acceleration constant, Δt is the integration interval, and $\hat{a}_t$ is the acceleration measured by the IMU.
This embodiment still includes:
and IMU position and attitude calibration, namely calibrating the speed and attitude of the map building device in the motion process by removing the noise of the IMU.
In this embodiment, the specific method for comparing information is as follows:
S201: extracting line features and surface features from a laser frame, substituting the position information and attitude information, and fusing them with the world map to generate a local line feature map and/or a local surface feature map;
S202: combining all local line feature maps and/or local surface feature maps according to the position, speed and attitude of the map construction device obtained by the IMU during motion to form a single-frame map;
S203: estimating the IMU zero offset by using the calculateEstimate function, wherein the generated estimate is a pre-deviation value, and the pre-deviation value is used as the spatial offset of the single-frame map for the next laser frame of the laser radar.
This embodiment also provides a laser inertia odometer, which comprises a dot matrix scanning device and an inertia measuring device that exchange data; the inertia measuring device and the scanning probe of the dot matrix scanning device are spatially fixed relative to each other.
The specific method for constructing the high-precision and high-quality point cloud map by combining the laser inertia odometer comprises the following steps:
the high-precision high-quality point cloud map construction method comprises the following four parts: dynamic object removal, odometer front-end processing (sensor information acquisition), non-linear optimized back-end processing (non-linear optimization), loopback detection, and map building blocks. In the general flow chart of this embodiment, as shown in fig. 1, sensor information is first input to the dynamic object removal module, and the output laser frame data without the dynamic object and other sensor data are used as input of subsequent modules. The front-end module gives the pose relationship among the nodes in real time, and the rear-end module performs nonlinear optimization on the pose relationship among the nodes with errors and noises. Meanwhile, the loop detection module judges whether to return to a place which is arrived previously or not according to the sensor information and the existing node relation, and adds corresponding constraint to the rear end. And finally, outputting the optimized node pose information by the rear end, and integrating the node pose information into a map by combining laser frame data.
The following description is given in steps:
1) Dynamic object removal
A large number of dynamic objects, such as pedestrians and vehicles, often exist in the environment. If the data collected by the sensors are used for mapping directly, smearing appears in the map and degrades its quality, and the dynamic objects also affect the accuracy of front-end matching. Therefore, they must be handled first.
The dynamic object removal module is mainly implemented by a point cloud segmentation algorithm based on a convolutional neural network, such as cnn_seg, and removes the points representing dynamic objects from the laser frames obtained by the laser radar. The input of the point cloud segmentation network is a laser frame obtained by the laser radar, and the output is the label category information to which every point in the frame belongs. After the label category information is obtained, points belonging to dynamic object categories are indexed and removed directly from the laser frame, and the result is used as the output of the dynamic object removal module.
Because a point cloud segmentation algorithm often has a certain miss rate, the dynamic object removal module performs the detection and removal operation once more on the laser frames used for mapping after the back end has been optimized, so as to guarantee the quality of the map.
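As a minimal sketch of the indexing-and-removal operation described above (pure numpy; the dynamic label ids are assumptions, since the actual ids depend on the segmentation network's label set):

```python
import numpy as np

# Assumed label ids for dynamic classes; the real ids depend on the
# point cloud segmentation network (e.g. cnn_seg) and its label set.
DYNAMIC_LABELS = np.array([1, 2])  # e.g. 1 = pedestrian, 2 = vehicle (assumed)

def remove_dynamic_points(points: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Return the laser frame with dynamic-object points removed.

    points: (N, 3) xyz coordinates of one laser frame
    labels: (N,) per-point class label output by the segmentation network
    """
    keep = ~np.isin(labels, DYNAMIC_LABELS)
    return points[keep]

# Example: a 4-point frame where points 1 and 3 belong to dynamic classes.
frame = np.random.rand(4, 3)
labels = np.array([0, 1, 0, 2])
static_frame = remove_dynamic_points(frame, labels)  # keeps points 0 and 2
```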
2) Odometer front-end processing
The odometer front-end module provides the pose information of the current sensor, and its positioning accuracy determines, to a great extent, the accuracy of the final map. This embodiment provides a laser inertia odometer method in which the IMU and the laser radar are tightly coupled, providing accurate pose information.
The IMU directly outputs its own acceleration and angular velocity values, and the changes in its position, velocity and attitude over a period Δt can be obtained by integration. In order to avoid recalculating all integral values after the estimated IMU zero offset is updated, the present embodiment determines position, velocity and attitude by a pre-integration method, expressed by the formulas

$$v_{t+\Delta t} = v_t + g\,\Delta t + R_t\,\hat{a}_t\,\Delta t$$

$$p_{t+\Delta t} = p_t + v_t\,\Delta t + \tfrac{1}{2}\,g\,\Delta t^2 + \tfrac{1}{2}\,R_t\,\hat{a}_t\,\Delta t^2$$

where v is velocity, p is position, R is the attitude rotation matrix, g is the gravitational acceleration constant, Δt is the integration interval, and $\hat{a}_t$ is the acceleration measured by the IMU.

Because IMU data often contain a certain amount of noise, which is amplified by high-frequency integration, the relative position and attitude values between nodes obtained by pre-integration need further calibration. In this embodiment, the pose $T_i$ obtained by IMU pre-integration serves as the initial value for laser frame matching, and the final result of the laser frame matching, namely the pose relationship between adjacent nodes, is taken as the output of the odometer front end.
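The following minimal sketch performs such a pre-integration with the GTSAM library adopted later in this embodiment; the IMU rate, the noise covariances and the stationary sample stream are assumptions for illustration only:

```python
import numpy as np
import gtsam

# Pre-integration parameters; gravity along -Z, noise values assumed.
params = gtsam.PreintegrationParams.MakeSharedU(9.81)
params.setAccelerometerCovariance(np.eye(3) * 1e-3)
params.setGyroscopeCovariance(np.eye(3) * 1e-4)
params.setIntegrationCovariance(np.eye(3) * 1e-7)

bias = gtsam.imuBias.ConstantBias()            # current zero-offset estimate
pim = gtsam.PreintegratedImuMeasurements(params, bias)

# Integrate one second of assumed 200 Hz stationary samples: the
# accelerometer measures the +9.81 m/s^2 specific force opposing gravity.
dt = 1.0 / 200.0
for _ in range(200):
    pim.integrateMeasurement(np.array([0.0, 0.0, 9.81]), np.zeros(3), dt)

# Predict position, velocity and attitude at the next node from the last
# state; when the bias estimate is later refined, only the cheap bias
# correction is redone instead of re-integrating all raw samples.
state_i = gtsam.NavState(gtsam.Pose3(), np.zeros(3))
state_j = pim.predict(state_i, bias)
print(state_j.pose().translation(), state_j.velocity())
```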
Laser frame matching uses an approach based on fusing features with a local map. Feature extraction is based on the roughness of points. Let $p_i$ be the point whose roughness is to be calculated, and let the set S contain the points adjacent to $p_i$ in the same laser channel (i.e., with the same ring value), half on either side of $p_i$. The roughness can be calculated by the formula

$$c = \frac{1}{|S| \cdot \lVert p_i \rVert}\, \Big\lVert \sum_{j \in S,\ j \neq i} (p_i - p_j) \Big\rVert$$

By setting thresholds $c_l$ and $c_h$, points can be divided into line features ($c > c_h$) and surface features ($c < c_l$). Line and surface features are extracted from the laser frames of the nearest n nodes respectively, and the pose information of these nodes $\{T_{i-n}, T_{i-n+1}, \ldots, T_{i-1}\}$ is used to transform the feature points into a unified world coordinate system W for fusion, yielding a locally fused line feature map and a locally fused surface feature map respectively.
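The roughness computation and threshold classification can be sketched as follows; the window size and the thresholds $c_l$, $c_h$ are assumed values, not ones given by the embodiment:

```python
import numpy as np

def roughness(ring: np.ndarray, i: int, half_window: int = 5) -> float:
    """Roughness c of point p_i within one laser channel (scan ring).

    ring: (M, 3) points of the channel ordered by scan angle; S is taken
    as half_window points on either side of p_i, per the formula above.
    """
    lo, hi = max(0, i - half_window), min(len(ring), i + half_window + 1)
    S = np.vstack([ring[lo:i], ring[i + 1:hi]])
    diff_sum = (ring[i] - S).sum(axis=0)          # sum of (p_i - p_j), j in S
    return np.linalg.norm(diff_sum) / (len(S) * np.linalg.norm(ring[i]))

# Assumed thresholds: c > c_h -> line feature, c < c_l -> surface feature.
c_l, c_h = 0.1, 1.0
ring = np.cumsum(np.random.rand(64, 3), axis=0)   # synthetic ordered channel
c = roughness(ring, 32)
kind = "line" if c > c_h else "surface" if c < c_l else "discarded"
```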
After the locally fused line feature map and surface feature map are obtained, the position and attitude obtained from IMU pre-integration can be used as initial values for matching the laser frame. First, line and surface features are extracted from the current laser frame respectively, and then transformed into the world coordinate system W using the pose $T_i$ obtained by IMU pre-integration. For the k-th line feature, the two points closest to the feature point are searched in the locally fused line feature map and recorded as $p^l_{k,1}$ and $p^l_{k,2}$; for the k-th surface feature, the three points closest to the feature point are found in the locally fused surface feature map and recorded as $p^s_{k,1}$, $p^s_{k,2}$ and $p^s_{k,3}$. Next, the distance error values of the line and surface features are determined: for the k-th line feature, the distance error $d^l_k$ is the distance from the feature point to the straight line determined by $p^l_{k,1}$ and $p^l_{k,2}$; for the k-th surface feature, the distance error $d^s_k$ is the distance from the feature point to the plane determined by $p^s_{k,1}$, $p^s_{k,2}$ and $p^s_{k,3}$. Finally, optimizing the objective

$$\min_{T_i} \Big( \sum_k d^l_k + \sum_k d^s_k \Big)$$

yields the node pose information $T_i$ output by the front end of the odometer.
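The two distance errors entering this objective can be sketched in plain numpy as follows (function names are illustrative):

```python
import numpy as np

def line_distance(p: np.ndarray, a: np.ndarray, b: np.ndarray) -> float:
    """d_k^l: distance from line feature point p to the line through the
    two nearest map points a, b."""
    return np.linalg.norm(np.cross(p - a, p - b)) / np.linalg.norm(b - a)

def plane_distance(p: np.ndarray, a: np.ndarray,
                   b: np.ndarray, c: np.ndarray) -> float:
    """d_k^s: distance from surface feature point p to the plane through
    the three nearest map points a, b, c."""
    n = np.cross(b - a, c - a)
    return abs(np.dot(p - a, n / np.linalg.norm(n)))

# The front end perturbs T_i to minimise the sum of these residuals, e.g.:
p = np.array([0.0, 0.0, 1.0])
d_l = line_distance(p, np.zeros(3), np.array([1.0, 0.0, 0.0]))   # -> 1.0
d_s = plane_distance(p, np.zeros(3), np.array([1.0, 0.0, 0.0]),
                     np.array([0.0, 1.0, 0.0]))                  # -> 1.0
```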
After the node pose information $T_i$ has been obtained by laser frame matching, the IMU zero offset is estimated with the calculateEstimate function of the ISAM2 class in the GTSAM optimization library, and the estimated result is used as the offset value in the next IMU pre-integration.
3) Non-linear optimized back-end processing
This embodiment uses a factor graph as the nonlinear optimization back end; the specific implementation adopts the GTSAM open-source optimization library, and a schematic diagram is shown in FIG. 2. Here l denotes a landmark, x a pose node, and z observation data; pose node $x_1$ generates the next pose node $x_2$ under the action of the motion equation f. The constraints of the motion equation f on the pose nodes x include the pose constraints generated by IMU pre-integration, the pose constraints generated by laser frame matching, and the loop detection constraints described below. The landmarks l and observations z may, for example, be the positioning results given by GPS or RTK. Note that the motion equation f, the landmarks l and the observations z can take various forms and are not limited to the illustrated examples: the motion equation f may also be a constraint given by a wheel odometer, and the landmarks l and observations z may also be feature points extracted from the environment observed by a camera together with their correspondence constraints. This embodiment can therefore realize multi-sensor fusion conveniently and effectively.
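A minimal sketch of such a factor-graph back end using GTSAM's ISAM2, with one prior and one assumed odometry constraint standing in for the motion equation f:

```python
import numpy as np
import gtsam
from gtsam.symbol_shorthand import X

graph = gtsam.NonlinearFactorGraph()
values = gtsam.Values()

# Prior on the first pose node x_1 anchors the graph.
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-3))
graph.add(gtsam.PriorFactorPose3(X(1), gtsam.Pose3(), prior_noise))
values.insert(X(1), gtsam.Pose3())

# One motion-equation constraint f between x_1 and x_2; in this embodiment
# it would come from IMU pre-integration, laser matching or loop detection.
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-1))
delta = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.0, 0.0, 0.0))  # assumed 1 m step
graph.add(gtsam.BetweenFactorPose3(X(1), X(2), delta, odom_noise))
values.insert(X(2), delta)

# ISAM2 updates incrementally, so adding a node does not re-optimize the
# whole graph -- the back-end efficiency argument made above.
isam = gtsam.ISAM2()
isam.update(graph, values)
estimate = isam.calculateEstimate()   # the calculateEstimate call cited above
x2 = estimate.atPose3(X(2))
```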
4) Loop detection
The odometer front end of this embodiment realizes relative positioning, i.e. the pose transformation constraint $T_{i+1}$ of the next node $x_{i+1}$ is expressed relative to the current node $x_i$. Accumulated error is therefore inevitably introduced over time, so this embodiment introduces a loop detection module, which judges from the sensor information and the existing node relations whether a previously visited place has been returned to, and adds corresponding constraints to the back end to eliminate the accumulated error.

The loop detection module of this embodiment is based on distance detection and laser frame matching. For a new state node $x_{i+1}$, the node with the minimum Euclidean distance $\lVert\cdot\rVert_2$ is first found among the existing state nodes $\{x_1, x_2, \ldots, x_i\}$ and recorded as $x_l$. Then the laser frames of the m nodes before and after the found node, $\{x_{l-m}, \ldots, x_l, \ldots, x_{l+m}\}$, are fused to obtain a locally fused line feature map and a locally fused surface feature map. The fusion method is the same as the feature and local map fusion approach adopted by laser frame matching at the odometer front end in step 2).

After the locally fused line feature map and surface feature map are obtained, the relative transformation $T_{l,i+1}$ between the new state node $x_{i+1}$ and the loop node $x_l$ can be calculated using the laser frame matching method of step 2). This relative transformation $T_{l,i+1}$ is then added as a motion-equation constraint f to the factor graph of the nonlinear optimization back end of step 3) and optimized jointly, thereby reducing the global accumulated error.
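The distance-based candidate search can be sketched as follows; the recency gap and the distance threshold are assumed heuristics, not values given by the embodiment:

```python
import numpy as np

def find_loop_node(positions: np.ndarray, new_pos: np.ndarray,
                   min_gap: int = 50, max_dist: float = 5.0):
    """Return index l of the loop candidate x_l for a new node, or None.

    positions: (N, 3) translations of existing nodes x_1..x_N;
    min_gap excludes the most recent nodes so a match is a true revisit.
    """
    if len(positions) <= min_gap:
        return None
    d = np.linalg.norm(positions[:len(positions) - min_gap] - new_pos, axis=1)
    l = int(np.argmin(d))              # node with minimum Euclidean distance
    return l if d[l] < max_dist else None

# A straight 100-node trajectory revisited near node 2:
traj = np.array([[t, 0.0, 0.0] for t in range(100)], dtype=float)
cand = find_loop_node(traj, np.array([2.0, 1.0, 0.0]))  # -> 2
```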
5) Map construction
After all sensor data have been processed and the nonlinear optimization back end of step 3) has optimized all the nodes, the pose transformation information $\{T_1, T_2, \ldots, T_n\}$ of each node $\{x_1, x_2, \ldots, x_n\}$ is obtained. At this point, the laser frames belonging to each node $\{x_1, x_2, \ldots, x_n\}$ undergo dynamic object detection and removal once more through the dynamic object removal module of step 1). Then, according to $\{T_1, T_2, \ldots, T_n\}$, the laser frames of all nodes are transformed into the unified world coordinate system W and directly superimposed to obtain the high-precision, high-quality point cloud map of the environment finally constructed in this embodiment.
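This final superposition step can be sketched as follows, representing each $T_i$ as a 4×4 homogeneous matrix (an assumed representation):

```python
import numpy as np

def assemble_map(frames, poses):
    """Superimpose all dynamic-object-removed laser frames in world frame W.

    frames: list of (N_i, 3) point arrays, one per node x_i
    poses:  list of (4, 4) homogeneous transforms T_i (node -> world)
    """
    clouds = []
    for pts, T in zip(frames, poses):
        homo = np.hstack([pts, np.ones((len(pts), 1))])
        clouds.append((homo @ T.T)[:, :3])    # apply T_i to every point
    return np.vstack(clouds)

# Two toy frames, the second shifted 1 m along x by its node pose.
T0, T1 = np.eye(4), np.eye(4)
T1[0, 3] = 1.0
cloud = assemble_map([np.zeros((2, 3)), np.zeros((2, 3))], [T0, T1])
```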
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (7)

1. A map construction method, comprising:
a sensor information acquisition step, in which a dot matrix scanning device measures the position information, speed information, attitude information and surrounding environment information of the map construction device in which it is located during motion;
a dynamic object removal step, in which dynamic objects in the surrounding environment information are removed by a convolutional-neural-network point cloud segmentation algorithm to obtain estimated environment information;
an odometer data acquisition step, in which the speed and attitude of the map construction device during motion are obtained based on the inertia measuring device;
an information comparison step, in which the information obtained in the sensor information acquisition step is compared with the information obtained in the odometer data acquisition step to generate estimated position information, estimated speed information and estimated attitude information;
a loop detection step, in which, when the map construction device arrives at a loop detection location, the estimated environment information is compared with that obtained on the previous arrival at the same location to obtain deviation correction information;
a nonlinear optimization and map drawing step, in which, based on a factor graph optimization framework, the estimated position information, estimated speed information, estimated attitude information, estimated environment information and deviation correction information are substituted into a world map, and the accumulated deviation between the two arrivals at the loop detection location is nonlinearly optimized according to the deviation correction information to draw the map;
at least three same scanning points exist between adjacent scanning positions of the dot matrix scanning device;
the point cloud segmentation algorithm is cnn_seg; the nonlinear optimization is based on the GTSAM optimization library;
the inertia measurement device is an inertia measurement unit, namely an IMU;
the specific method for information comparison comprises the following steps:
S201: extracting line features and surface features from a laser frame, substituting the position information and attitude information, and fusing them with the world map to generate a local line feature map and/or a local surface feature map;
S202: combining all the local line feature maps and/or the local surface feature maps according to the position, speed and attitude of the map construction device obtained by the IMU during motion to form a single-frame map;
S203: estimating the IMU zero offset by using the calculateEstimate function in the GTSAM optimization library, wherein the generated estimate is a pre-deviation value, and the pre-deviation value is used as the spatial offset of the single-frame map for the next laser frame of the laser radar.
2. The map construction method according to claim 1, characterized in that: the dot matrix scanning device is a 3D laser radar.
3. The map building method according to claim 2, wherein the specific method for dynamic object removal comprises the following steps:
S101: inputting a laser frame of the laser radar into the point cloud segmentation network, and outputting the label category information of all scanning points of the laser frame;
S102: indexing, according to the label category information of all scanning points, the scanning points belonging to dynamic object categories and removing them from the laser frame to obtain a dynamic-object-removed laser frame.
4. The mapping method according to claim 3, wherein the specific method of dynamic object removal further comprises
S103: further indexing and removing, in the dynamic-object-removed laser frame, the scanning points belonging to dynamic object categories.
5. The mapping method of claim 1, wherein the position, velocity and attitude of the IMU-derived mapping device during motion are calculated by the following equations:
$$v_{t+\Delta t} = v_t + g\,\Delta t + R_t\,\hat{a}_t\,\Delta t$$

$$p_{t+\Delta t} = p_t + v_t\,\Delta t + \tfrac{1}{2}\,g\,\Delta t^2 + \tfrac{1}{2}\,R_t\,\hat{a}_t\,\Delta t^2$$

wherein: v is velocity, p is position, R is the attitude rotation matrix, g is the gravitational acceleration constant, Δt is the integration interval, and $\hat{a}_t$ is the acceleration measured by the IMU.
6. The mapping method according to claim 1, further comprising:
IMU position and attitude calibration, wherein the IMU is calibrated to obtain the speed and attitude of the mapping device during movement by removing noise of the IMU.
7. A laser inertia odometer for the map construction method according to any one of claims 1 to 6, characterized in that: it comprises a dot matrix scanning device and an inertia measuring device which exchange data; the inertia measuring device and the scanning probe of the dot matrix scanning device are spatially fixed relative to each other.
CN202110618535.5A 2021-06-03 2021-06-03 Map construction method and laser inertia odometer Active CN113358112B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110618535.5A CN113358112B (en) 2021-06-03 2021-06-03 Map construction method and laser inertia odometer

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110618535.5A CN113358112B (en) 2021-06-03 2021-06-03 Map construction method and laser inertia odometer

Publications (2)

Publication Number Publication Date
CN113358112A CN113358112A (en) 2021-09-07
CN113358112B (en) 2023-01-17

Family

ID=77531894

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110618535.5A Active CN113358112B (en) 2021-06-03 2021-06-03 Map construction method and laser inertia odometer

Country Status (1)

Country Link
CN (1) CN113358112B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113850864B (en) * 2021-09-14 2024-04-12 中南大学 GNSS/LIDAR loop detection method for outdoor mobile robot
CN114018284B (en) * 2021-10-13 2024-01-23 上海师范大学 Wheel speed odometer correction method based on vision
CN114170320B (en) * 2021-10-29 2022-10-28 广西大学 Automatic positioning and working condition self-adaption method of pile driver based on multi-sensor fusion
CN115200588B (en) * 2022-09-14 2023-01-06 煤炭科学研究总院有限公司 SLAM autonomous navigation method and device for mobile robot
CN115326068B (en) * 2022-10-17 2023-01-24 北京理工大学 Design method and system for laser radar-inertial measurement unit fusion odometer

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106570454B (en) * 2016-10-10 2019-06-11 同济大学 Pedestrian traffic parameter extracting method based on mobile laser scanning
IL250382B (en) * 2017-01-31 2021-01-31 Arbe Robotics Ltd A radar-based system and method for real-time simultaneous localization and mapping
CN110261870B (en) * 2019-04-15 2021-04-06 浙江工业大学 Synchronous positioning and mapping method for vision-inertia-laser fusion
CN111105495A (en) * 2019-11-26 2020-05-05 四川阿泰因机器人智能装备有限公司 Laser radar mapping method and system fusing visual semantic information
CN111583136B (en) * 2020-04-25 2023-05-23 华南理工大学 Method for simultaneously positioning and mapping autonomous mobile platform in rescue scene
CN111929699B (en) * 2020-07-21 2023-05-09 北京建筑大学 Laser radar inertial navigation odometer considering dynamic obstacle and map building method and system

Also Published As

Publication number Publication date
CN113358112A (en) 2021-09-07

Similar Documents

Publication Publication Date Title
CN113358112B (en) Map construction method and laser inertia odometer
CN110160542B (en) Method and device for positioning lane line, storage medium and electronic device
CN110412635B (en) GNSS/SINS/visual tight combination method under environment beacon support
CN109341706B (en) Method for manufacturing multi-feature fusion map for unmanned vehicle
CN114526745B (en) Drawing construction method and system for tightly coupled laser radar and inertial odometer
JP5162849B2 (en) Fixed point position recorder
CN111652179A (en) Semantic high-precision map construction and positioning method based on dotted line feature fusion laser
CN112162297B (en) Method for eliminating dynamic obstacle artifacts in laser point cloud map
CN114323033B (en) Positioning method and equipment based on lane lines and feature points and automatic driving vehicle
JP4984659B2 (en) Own vehicle position estimation device
CN112859051A (en) Method for correcting laser radar point cloud motion distortion
CN113933818A (en) Method, device, storage medium and program product for calibrating laser radar external parameter
CN114565674B (en) Method and device for purely visually positioning urban structured scene of automatic driving vehicle
CN116452763A (en) Three-dimensional point cloud map construction method based on error Kalman filtering and factor graph
CN115127543A (en) Method and system for eliminating abnormal edges in laser mapping optimization
CN114119886A (en) High-precision map point cloud reconstruction method and device, vehicle, equipment and storage medium
CN110515110B (en) Method, device, equipment and computer readable storage medium for data evaluation
CN115930977A (en) Method and system for positioning characteristic degradation scene, electronic equipment and readable storage medium
CN115183762A (en) Airport warehouse inside and outside mapping method, system, electronic equipment and medium
CN113838129B (en) Method, device and system for obtaining pose information
CN117388830A (en) External parameter calibration method, device, equipment and medium for laser radar and inertial navigation
Liu et al. Integrated velocity measurement algorithm based on optical flow and scale-invariant feature transform
CN115468576A (en) Automatic driving positioning method and system based on multi-mode data fusion
CN115963508A (en) Tightly-coupled SLAM method and system for laser radar and IMU
Nguyen et al. Mcd: Diverse large-scale multi-campus dataset for robot perception

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A Map Construction Method and a Laser Inertial Odometer

Effective date of registration: 20230727

Granted publication date: 20230117

Pledgee: Haidian Beijing science and technology enterprise financing Company limited by guarantee

Pledgor: Beijing Chaoxing Future Technology Co.,Ltd.

Registration number: Y2023110000311

PC01 Cancellation of the registration of the contract for pledge of patent right

Granted publication date: 20230117

Pledgee: Haidian Beijing science and technology enterprise financing Company limited by guarantee

Pledgor: Beijing Chaoxing Future Technology Co.,Ltd.

Registration number: Y2023110000311