CN114924288A - System and method for constructing vehicle front three-dimensional digital elevation map - Google Patents
- Publication number: CN114924288A
- Application number: CN202210581073.9A
- Authority
- CN
- China
- Prior art keywords
- vehicle
- dimensional
- plane
- point cloud
- frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60G—VEHICLE SUSPENSION ARRANGEMENTS
- B60G17/00—Resilient suspensions having means for adjusting the spring or vibration-damper characteristics, for regulating the distance between a supporting surface and a sprung part of vehicle or for locking suspension during use to meet varying vehicular or surface conditions, e.g. due to speed or load
- B60G17/015—Resilient suspensions having means for adjusting the spring or vibration-damper characteristics, for regulating the distance between a supporting surface and a sprung part of vehicle or for locking suspension during use to meet varying vehicular or surface conditions, e.g. due to speed or load the regulating means comprising electric or electronic elements
- B60G17/019—Resilient suspensions having means for adjusting the spring or vibration-damper characteristics, for regulating the distance between a supporting surface and a sprung part of vehicle or for locking suspension during use to meet varying vehicular or surface conditions, e.g. due to speed or load the regulating means comprising electric or electronic elements characterised by the type of sensor or the arrangement thereof
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
- G01C21/3804—Creation or updating of map data
- G01C21/3833—Creation or updating of map data characterised by the source of data
- G01C21/3837—Data obtained from a single source
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Remote Sensing (AREA)
- General Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Electromagnetism (AREA)
- Computer Networks & Wireless Communication (AREA)
- Automation & Control Theory (AREA)
- Mechanical Engineering (AREA)
- Computer Graphics (AREA)
- Traffic Control Systems (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Optical Radar Systems And Details Thereof (AREA)
Abstract
The invention provides a system and method for constructing a three-dimensional digital elevation map of the road in front of a vehicle. The construction system comprises a data acquisition module, a data processing module, a vehicle pose estimation module and a map construction module. The data acquisition module is used for acquiring three-dimensional point cloud data of the road surface in front of the vehicle; the data processing module is used for removing outliers from and down-sampling the three-dimensional point cloud data; the vehicle pose estimation module is used for estimating vehicle pose information in real time from the point cloud data after outlier removal and down-sampling; the map construction module is used for constructing a three-dimensional digital elevation map in front of the vehicle from the processed point cloud data and the vehicle pose information. The method can acquire a three-dimensional digital elevation map of the road surface in front of the vehicle and can be applied to a vehicle active suspension control system, reducing the time lag of active suspension control and improving the control effect of the active suspension, thereby improving the ride smoothness and comfort of the vehicle.
Description
Technical Field
The invention relates to the technical field of identifying road surface conditions in front of a vehicle, and in particular to a system and a method for constructing a three-dimensional digital elevation map of the road in front of a vehicle.
Background
Environmental awareness is a prerequisite for intelligent driving by intelligent vehicles. In the field of vertical vehicle control, most conventional active suspension systems collect the dynamic response of the vehicle through acceleration sensors mounted on the body or the axles, and identify the road surface through the transfer function of the vehicle system in order to control the suspension. Although this improves ride smoothness and comfort to a certain extent, it can only sense the road surface the vehicle has already driven over and therefore suffers from a certain hysteresis; its sensing and control performance is poor on more complex road features such as speed bumps and potholes. There are also vision-based road sensing techniques that can identify road surface features in advance and send them to the suspension controller for pre-adjustment of the suspension. However, vision-based schemes are sensitive to illumination and occlusion in the external environment, and their robustness is poor.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a construction system and a construction method of a vehicle front three-dimensional digital elevation map.
In order to achieve the purpose, the invention adopts the following specific technical scheme:
the invention provides a construction system of a vehicle front three-dimensional digital elevation map, which comprises: the data acquisition module is used for acquiring three-dimensional point cloud data of a road surface in front of the vehicle; the data processing module is used for removing outliers and down-sampling the three-dimensional point cloud data; the vehicle pose estimation module is used for estimating vehicle pose information in real time according to the three-dimensional point cloud data which completes outlier removal and down sampling; and the map construction module is used for constructing a three-dimensional digital elevation map in front of the vehicle according to the three-dimensional point cloud data which is subjected to outlier removal and down-sampling and the vehicle pose information.
Preferably, the data acquisition module is a lidar.
Preferably, the data processing module comprises: the point cloud processing unit is used for removing outliers and down-sampling the three-dimensional point cloud data; and the point cloud extraction unit is used for extracting passing area point clouds and non-passing area point clouds from the three-dimensional point cloud data after the outliers are removed and the down-sampling is carried out.
Preferably, the vehicle pose estimation module includes: the feature extraction unit is used for extracting features of the passing area point cloud and the non-passing area point cloud by a local-smoothness-based method to obtain plane feature points and edge feature points; the feature point matching unit is used for performing feature matching on the plane feature points of two consecutive frames and performing feature matching on the edge feature points of two consecutive frames; the optimization equation construction unit is used for constructing a plane feature point optimization equation based on the plane feature points completing the feature matching and constructing an edge feature point optimization equation based on the edge feature points completing the feature matching; and the pose change acquisition unit is used for merging the plane feature point optimization equation and the edge feature point optimization equation, and performing optimization solution by the Levenberg-Marquardt method to obtain the vehicle pose change relation between the two frames.
Preferably, the map building module comprises: the coordinate system conversion unit is used for converting the three-dimensional point cloud data from the laser radar coordinate system to the vehicle coordinate system; the point cloud mapping unit is used for mapping the three-dimensional point cloud data in the vehicle coordinate system to a two-dimensional grid formed by dividing the X-Y plane of the vehicle coordinate system; and the map construction unit is used for calculating the height measurement of the two-dimensional grid according to a preset rule and updating the height measurement of the two-dimensional grid based on the vehicle pose transformation relation between two consecutive frames.
The invention provides a method for constructing a vehicle front three-dimensional digital elevation map by using the vehicle front three-dimensional digital elevation map constructing system, which comprises the following steps:
s1: acquiring three-dimensional point cloud data of a road surface in front of a vehicle;
s2: removing outliers and down-sampling the three-dimensional point cloud data, and extracting passing area point clouds and non-passing area point clouds;
s3: estimating vehicle pose information in real time according to the three-dimensional point cloud data after outlier removal and down sampling are completed;
s4: and constructing a three-dimensional digital elevation map in front of the vehicle according to the three-dimensional point cloud data which is subjected to outlier removal and down-sampling and the vehicle pose information.
Preferably, step S1 obtains three-dimensional point cloud data of the road surface in front of the vehicle through laser radar scanning.
Preferably, step S3 includes the following sub-steps:
s31: carrying out feature extraction on the point cloud of the passing area to obtain plane feature points, and carrying out feature extraction on the point cloud of the non-passing area to obtain edge feature points; wherein the local smoothness is evaluated by defining a parameter f, namely:

f = \frac{1}{|S| \cdot \|X_{(k,i)}\|} \left\| \sum_{j \in S,\, j \neq i} \left( X_{(k,i)} - X_{(k,j)} \right) \right\|

wherein S represents the set of all points in each frame of the lidar, X_{(k,i)} represents point i of the k-th frame in the lidar coordinate system, and X_{(k,j)} represents point j of the k-th frame in the lidar coordinate system;
sorting the three-dimensional point cloud data after outlier removal and down-sampling according to the f value; in each scanning line of the laser radar, selecting the 10 points with the largest f value as edge feature points and the 25 points with the smallest f value as plane feature points; removing the edge feature points and plane feature points which do not meet the conditions according to a preset threshold;
step 32: carrying out feature matching on the plane feature points of the front frame and the rear frame, and carrying out feature matching on the edge feature points of the front frame and the rear frame;
step 33: constructing a plane feature point optimization equation based on the plane feature points completing the feature matching and constructing an edge feature point optimization equation based on the edge feature points completing the feature matching;
the construction of the edge feature point optimization equation comprises the following steps:
step 331: after the pose of the edge feature point i in the k-th frame is changed, searching the edge feature point m which is closest to the edge feature point i in the same scanning line of the (k+1)-th frame and the edge feature point n which is closest to the edge feature point i in an adjacent scanning line of the (k+1)-th frame;
step 332: forming a straight line l from the edge feature points m and n;
step 333: calculating the distance d_i from the edge feature point i to the straight line l;
repeating the steps 331-333 for all the edge feature points, and constructing the edge feature point optimization equation d_C = \sum_{i=1}^{N} d_i, wherein N is the number of edge feature points;
the construction of the optimization equation of the plane characteristic points comprises the following steps:
step 331': after the pose of the plane feature point j in the k-th frame is changed, searching the plane feature points a and b which are closest and next closest to the plane feature point j in the same scanning line of the (k+1)-th frame and the plane feature point q which is closest to the plane feature point j in an adjacent scanning line of the (k+1)-th frame;
step 332': forming a plane e from the plane feature points a, b and q;
step 333': calculating the distance d_j from the plane feature point j to the plane e;
repeating the steps 331' to 333' for all the plane feature points, and constructing the plane feature point optimization equation d_F = \sum_{j=1}^{M} d_j, wherein M is the number of plane feature points;
step 34: merging the plane feature point optimization equation d_F and the edge feature point optimization equation d_C, and performing optimization solution by the Levenberg-Marquardt method to obtain the vehicle pose change relation T between the two frames, i.e. T = \arg\min_{T} (d_F + d_C).
preferably, step S4 specifically includes the following sub-steps:
s41: converting the three-dimensional point cloud data subjected to outlier removal and down-sampling from a laser radar coordinate system to a vehicle coordinate system;
s42: taking the projection of the vehicle's centre of mass onto the ground as the origin, with the X axis pointing to the front of the vehicle, the Y axis pointing to the left of the driver and the Z axis pointing upward through the centre of mass, the X-Y plane is divided into two-dimensional grids of preset size; the three-dimensional point cloud data after outlier removal and down-sampling are mapped into the two-dimensional grids through coordinate transformation, and a new height measurement is generated in each cell of the two-dimensional grid; the height measurement is approximated by a Gaussian distribution N(p, \sigma_p^2), where p is the height measurement and \sigma_p^2 is the variance of the height measurement;
the rule that constrains the height update in each grid is:

\hat{h}_t = \begin{cases} \dfrac{\sigma_p^2 \hat{h}_{t-1} + \hat{\sigma}_{t-1}^2 z_t}{\hat{\sigma}_{t-1}^2 + \sigma_p^2}, & d_M \le c \\ \hat{h}_{t-1}, & d_M > c \end{cases}

the update rule of the variance is:

\hat{\sigma}_t^2 = \begin{cases} \dfrac{\hat{\sigma}_{t-1}^2 \sigma_p^2}{\hat{\sigma}_{t-1}^2 + \sigma_p^2}, & d_M \le c \\ \hat{\sigma}_{t-1}^2, & d_M > c \end{cases}

wherein c is a preset threshold, \hat{h}_t is the updated height measurement of the grid at time t, z_t is the new height measurement of the grid at time t, \hat{h}_{t-1} is the height measurement of the grid at time t-1, \hat{\sigma}_t^2 is the updated variance of the grid at time t, \hat{\sigma}_{t-1}^2 is the variance of the grid at time t-1, and d_M is the Mahalanobis distance, defined as:

d_M = \dfrac{|z_t - \hat{h}_{t-1}|}{\sqrt{\hat{\sigma}_{t-1}^2 + \sigma_p^2}}
s43: assuming that the vehicle moves at a constant speed, based on the vehicle pose transformation relation T between the current frame and the previous frame and on the height measurement and variance of each grid cell of the current frame, the height measurement p and variance \sigma_p^2 of the corresponding position in the grid of the next frame are solved by rotating and translating the grid of the current frame.
Preferably, \hat{\sigma}_p^2 = \sigma_p^2 + J_S \Sigma_S J_S^T + J_R \Sigma_R J_R^T, wherein \Sigma_S is the covariance matrix of the lidar noise model, \Sigma_R is the covariance matrix of the vehicle pose estimation, J_S is the Jacobian matrix of the lidar measurement, J_R is the Jacobian matrix of the vehicle motion, and the superscript T denotes the matrix transpose.
The invention can obtain the following technical effects:
1. The invention uses a local-smoothness-based method to extract the plane feature points and the edge feature points separately, making the feature extraction more stable and less susceptible to interference.
2. The invention processes the edge features and the plane features with point-to-line and point-to-plane methods respectively to construct the optimization equations, giving higher pose estimation accuracy and better robustness.
3. The invention takes into account the uncertainty of the laser radar measurement and the uncertainty of the pose estimation, so that the reconstructed three-dimensional map is closer to the real situation.
4. The invention uses the laser radar as the perception sensor, so that road surface features can be perceived in advance, improving the response speed of active suspension control.
Drawings
FIG. 1 is a schematic diagram of a logical structure of a vehicle front three-dimensional digital elevation map construction system provided in accordance with an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a method for constructing a three-dimensional digital elevation map of a vehicle front according to an embodiment of the present invention.
Wherein the reference numerals include: the system comprises a vehicle front three-dimensional digital elevation map construction system 100, a data acquisition module 110, a data processing module 120, a vehicle pose estimation module 130 and a map construction module 140.
Detailed Description
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. In the following description, the same reference numerals are used for the same blocks. In the case of the same reference numerals, their names and functions are also the same. Therefore, detailed description thereof will not be repeated.
The purposes, technical solutions and advantages of the present invention will be more clearly understood from the following detailed description with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit it.
FIG. 1 illustrates a logical architecture of a vehicle front three-dimensional digital elevation mapping system provided in accordance with an embodiment of the present invention.
As shown in fig. 1, the vehicle front three-dimensional digital elevation map construction system 100 provided in the embodiment of the present invention includes a data acquisition module 110, a data processing module 120, a vehicle pose estimation module 130 and a map construction module 140. The data acquisition module 110 is configured to acquire three-dimensional point cloud data of the road surface in front of the vehicle; the data processing module 120 is configured to remove outliers from and down-sample the three-dimensional point cloud data; the vehicle pose estimation module 130 is configured to estimate vehicle pose information in real time from the point cloud data after outlier removal and down-sampling; the map construction module 140 is configured to construct a three-dimensional digital elevation map in front of the vehicle from the processed point cloud data and the vehicle pose information.
The data acquisition module 110 is a laser radar, and obtains three-dimensional point cloud data of a road surface in front of the vehicle through scanning of the laser radar.
The data processing module 120 comprises a point cloud processing unit and a point cloud extraction unit, wherein the point cloud processing unit is used for removing outliers and down-sampling three-dimensional point cloud data; the point cloud extraction unit is used for extracting passing area point clouds and non-passing area point clouds from the three-dimensional point cloud data with the outliers removed and the down-sampling.
The vehicle pose estimation module 130 comprises a feature extraction unit, a feature point matching unit, an optimization equation construction unit and a pose change acquisition unit; the characteristic extraction unit is used for extracting the characteristics of the point cloud of the passing area by adopting a method based on local smoothness to obtain plane characteristic points and extracting the characteristics of the point cloud of the non-passing area by adopting a method based on local smoothness to obtain edge characteristic points; the feature point matching unit is used for performing feature matching on the plane feature points of the current frame and the previous frame and performing feature matching on the edge feature points of the current frame and the previous frame; the optimization equation construction unit is used for constructing a plane feature point optimization equation based on the plane feature points completing the feature matching and constructing an edge feature point optimization equation based on the edge feature points completing the feature matching; the pose change acquiring unit is used for merging the plane feature point optimization equation and the edge feature point optimization equation, and performing optimization solution by using a Levenberg-Marquardt method to obtain a vehicle pose change relation between the front frame and the rear frame.
The map building module 140 includes a coordinate system transformation unit, a point cloud mapping unit and a map building unit. The coordinate system transformation unit converts the three-dimensional point cloud data from the laser radar coordinate system to the vehicle coordinate system; the point cloud mapping unit maps the three-dimensional point cloud data in the vehicle coordinate system to a two-dimensional grid formed by dividing the X-Y plane of the vehicle coordinate system; the map building unit calculates the height measurement of the two-dimensional grid according to a preset rule and updates it based on the vehicle pose transformation relation between two consecutive frames.
The above details describe the structure of the system for constructing a three-dimensional digital elevation map in front of a vehicle according to the embodiment of the present invention. Corresponding to the system for constructing the three-dimensional digital elevation map in front of the vehicle, the embodiment of the invention also provides a method for constructing the three-dimensional digital elevation map in front of the vehicle, which is realized based on the system for constructing the three-dimensional digital elevation map in front of the vehicle.
FIG. 2 illustrates a flow chart of a method for constructing a three-dimensional digital elevation map of a vehicle front provided by an embodiment of the invention.
As shown in fig. 2, the method for constructing a three-dimensional digital elevation map in front of a vehicle according to an embodiment of the present invention includes the following steps:
s1: and acquiring three-dimensional point cloud data of a road surface in front of the vehicle.
And scanning by a laser radar to obtain three-dimensional point cloud data of the road surface in front of the vehicle.
S2: and removing outliers and down-sampling the three-dimensional point cloud data, and extracting passing area point clouds and non-passing area point clouds.
S3: and estimating the vehicle pose information in real time according to the three-dimensional point cloud data after outlier removal and down sampling are finished.
Step S3 specifically includes the following substeps:
s31: and extracting the features of the point clouds in the passing areas to obtain plane feature points, and extracting the features of the point clouds in the non-passing areas to obtain edge feature points.
The local smoothness is evaluated by defining a parameter f, namely:
wherein the content of the first and second substances,representing the set of all points in each frame of the lidar,point i representing the k-th frame in the lidar coordinate system,representing point j in the laser radar coordinate system for the k-th frame.
The three-dimensional point cloud data after outlier removal and down-sampling are sorted according to the f value; in each scanning line of the laser radar, the 10 points with the largest f value are selected as edge feature points and the 25 points with the smallest f value as plane feature points; edge feature points and plane feature points which do not meet the conditions are removed according to a preset threshold.
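The smoothness scoring and per-scan-line selection above can be sketched as follows. The neighbourhood window size and the handling of unscored line endpoints are assumptions for illustration; the per-line counts (10 edge, 25 plane) follow the text.

```python
import numpy as np

def local_smoothness(scan_line, window=5):
    """f value per point of one scan line: norm of the summed difference vectors
    to the neighbours, normalised by neighbourhood size and point range."""
    n = len(scan_line)
    f = np.full(n, np.nan)          # endpoints without a full window stay unscored
    for i in range(window, n - window):
        neigh = np.r_[scan_line[i - window:i], scan_line[i + 1:i + window + 1]]
        diff = (scan_line[i] - neigh).sum(axis=0)
        f[i] = np.linalg.norm(diff) / (len(neigh) * np.linalg.norm(scan_line[i]))
    return f

def select_features(scan_line, n_edge=10, n_plane=25):
    """Largest-f points become edge features, smallest-f points plane features."""
    f = local_smoothness(scan_line)
    order = np.argsort(f)           # NaNs (unscored endpoints) sort last
    valid = order[~np.isnan(f[order])]
    plane_idx = valid[:n_plane]     # smallest f: locally flat points
    edge_idx = valid[-n_edge:]      # largest f: sharp/edge points
    return edge_idx, plane_idx
```

On a straight run of points the symmetric difference vectors cancel and f is near zero; at a corner they accumulate, which is what makes the corner point score highest.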
Step 32: and performing feature matching on the plane feature points of the current frame and the plane feature points of the previous frame, and performing feature matching on the edge feature points of the current frame and the edge feature points of the previous frame.
Step 33: and constructing a plane feature point optimization equation based on the plane feature points completing the feature matching and constructing an edge feature point optimization equation based on the edge feature points completing the feature matching.
The construction of the optimization equation of the edge feature point comprises the following steps:
Step 331: after the pose of the edge feature point i in the k-th frame is changed, the edge feature point m which is closest to the edge feature point i in the same scanning line of the (k+1)-th frame and the edge feature point n which is closest to the edge feature point i in an adjacent scanning line of the (k+1)-th frame are searched.
Step 332: a straight line l is formed from the edge feature points m and n.
Step 333: the distance d_i from the edge feature point i to the straight line l is calculated.
Repeating the steps 331-333 for all the edge feature points constructs the edge feature point optimization equation d_C = \sum_{i=1}^{N} d_i, where N is the number of edge feature points.
That is, the distance from each edge feature point to its straight line l is calculated according to steps 331 to 333 and the distances are summed; the edge feature point optimization equation d_C is the sum of the distances of all edge feature points to their straight lines l.
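The point-to-line distance of step 333 and the summed cost d_C can be written compactly with a cross product. A sketch; the function and variable names are illustrative:

```python
import numpy as np

def point_to_line_distance(p, m, n):
    """Distance from point p to the line through edge feature points m and n:
    |(p - m) x (p - n)| / |m - n| (parallelogram area over base length)."""
    return np.linalg.norm(np.cross(p - m, p - n)) / np.linalg.norm(m - n)

def edge_cost(points, lines):
    """d_C: sum of the distances of all matched edge points to their lines."""
    return sum(point_to_line_distance(p, m, n) for p, (m, n) in zip(points, lines))
```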
The construction of the plane characteristic point optimization equation comprises the following steps:
Step 331': after the pose of the plane feature point j in the k-th frame is changed, the plane feature points a and b which are closest and next closest to the plane feature point j in the same scanning line of the (k+1)-th frame and the plane feature point q which is closest to the plane feature point j in an adjacent scanning line of the (k+1)-th frame are searched.
Step 332': a plane e is formed from the plane feature points a, b and q.
Step 333': the distance d_j from the plane feature point j to the plane e is calculated.
Repeating the steps 331' to 333' for all the plane feature points constructs the plane feature point optimization equation d_F = \sum_{j=1}^{M} d_j, where M is the number of plane feature points.
That is, the distance from each plane feature point to its plane e is calculated according to steps 331' to 333' and the distances are summed; the plane feature point optimization equation d_F is the sum of the distances of all plane feature points to their planes e.
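Likewise, the point-to-plane distance of step 333' and the summed cost d_F can be sketched with a plane normal from the three matched points (names are illustrative):

```python
import numpy as np

def point_to_plane_distance(p, a, b, q):
    """Distance from point p to the plane spanned by feature points a, b and q."""
    normal = np.cross(b - a, q - a)
    return abs(np.dot(p - a, normal)) / np.linalg.norm(normal)

def plane_cost(points, planes):
    """d_F: sum of the distances of all matched plane points to their planes."""
    return sum(point_to_plane_distance(p, a, b, q) for p, (a, b, q) in zip(points, planes))
```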
Step 34: merging the plane feature point optimization equation d_F and the edge feature point optimization equation d_C, and performing optimization solution by the Levenberg-Marquardt method, i.e. T = \arg\min_{T} (d_F + d_C), to obtain the vehicle pose change relation T between the current frame and the previous frame.
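A minimal sketch of step 34's Levenberg-Marquardt solve. For brevity it estimates a planar (x, y, yaw) pose from simple point-to-point residuals instead of the patent's merged point-to-line/point-to-plane residuals, and it assumes SciPy is available:

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(pose, src, dst):
    """Stacked residuals between transformed source points and their matches."""
    tx, ty, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    return (src @ R.T + np.array([tx, ty]) - dst).ravel()

def estimate_pose(src, dst):
    """Levenberg-Marquardt estimate of the frame-to-frame transform (tx, ty, yaw)."""
    sol = least_squares(residuals, x0=np.zeros(3), args=(src, dst), method='lm')
    return sol.x
```

Replacing `residuals` with the d_i and d_j distances of steps 333 and 333' would recover the patent's formulation while keeping the same solver call.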
S4: and constructing a three-dimensional digital elevation map in front of the vehicle according to the three-dimensional point cloud data which is subjected to outlier removal and down-sampling and the vehicle pose information.
Step S4 specifically includes the following substeps:
s41: converting the three-dimensional point cloud data which is subjected to outlier removal and down-sampling from a laser radar coordinate system to a vehicle coordinate system;
S42: taking the projection of the vehicle's centre of mass onto the ground as the origin, with the X axis pointing to the front of the vehicle, the Y axis pointing to the left of the driver and the Z axis pointing upward through the centre of mass, the X-Y plane is divided into two-dimensional grids of preset size; the three-dimensional point cloud data after outlier removal and down-sampling are mapped into the two-dimensional grids through coordinate transformation, and a new height measurement is generated in each cell of the two-dimensional grid; the height measurement is approximated by a Gaussian distribution N(p, \sigma_p^2), where p is the height measurement and \sigma_p^2 is the variance of the height measurement.
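The grid mapping of S42 can be sketched as follows. Cell size and map extent are illustrative assumptions, and each cell here simply takes the mean height of the points that fall into it:

```python
import numpy as np

def build_grid(points, cell=0.1, x_range=(0.0, 10.0), y_range=(-5.0, 5.0)):
    """Map vehicle-frame points (x, y, z) to a 2-D grid; each cell holds the
    mean height of the points falling into it (NaN where no point fell)."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    heights = np.full((nx, ny), np.nan)
    counts = np.zeros((nx, ny))
    sums = np.zeros((nx, ny))
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    ok = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)   # drop points off the map
    np.add.at(counts, (ix[ok], iy[ok]), 1)                # unbuffered scatter-add
    np.add.at(sums, (ix[ok], iy[ok]), points[ok, 2])
    filled = counts > 0
    heights[filled] = sums[filled] / counts[filled]
    return heights
```

`np.add.at` is used instead of fancy-index assignment so that several points landing in the same cell all contribute.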
the update rule that constrains the height measurements in each grid is:
the variance updating rule is as follows:
wherein c is a preset threshold value,as a measure of the height of the grid at time t, z t The updated height measurement for the grid at time t,is a measurement of the height of the grid at time t-1,for the variance value of the grid update at time t,is the variance value of the grid at time t-1, d M Is the Mahalanobis distance, d M Is defined as:
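This gated height and variance update is a one-dimensional Kalman-style fusion with a Mahalanobis-distance outlier gate; a sketch (the threshold c and all numeric values are made up):

```python
import numpy as np

def update_cell(z_prev, var_prev, p, var_p, c=2.0):
    """Fuse a new measurement (p, var_p) into a cell estimate (z_prev, var_prev).

    The measurement is accepted only if its Mahalanobis distance to the
    current estimate is below the threshold c; otherwise the cell keeps
    its previous value.
    """
    d_M = abs(p - z_prev) / np.sqrt(var_prev)     # Mahalanobis distance
    if d_M >= c:
        return z_prev, var_prev                   # reject the outlier
    z = (var_prev * p + var_p * z_prev) / (var_prev + var_p)
    var = (var_prev * var_p) / (var_prev + var_p)
    return z, var

print(update_cell(1.0, 0.04, 1.1, 0.04))  # fused, roughly (1.05, 0.02)
print(update_cell(1.0, 0.04, 3.0, 0.04))  # gated out, stays (1.0, 0.04)
```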
s43: assuming that the vehicle moves at a constant speed, based on the vehicle pose transformation relation T between the current frame and the previous frame and the height measurement value and the variance value of the grid of the current frame, the height measurement value p and the variance value of the corresponding position in the grid of the next frame can be conveniently solved by rotating and translating the corresponding position in the grid of the current frame
In which the height measurement p of the grid of corresponding positions remains constant, the variance of the height measurementsUpdating according to the motion uncertainty, namely:
therein, sigma S Covariance matrix, Σ, for lidar noise model R Covariance matrix for vehicle pose estimation, J S Jacobian matrix for lidar measurements, J R Is the Jacobian matrix of vehicle motion, and T is the transpose of the matrix.
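The motion-uncertainty propagation can be sketched as follows (all covariance and Jacobian values are made-up stand-ins for the lidar noise model and pose-estimate covariances):

```python
import numpy as np

# sigma_hat^2 = J_S * Sigma_S * J_S^T + J_R * Sigma_R * J_R^T
Sigma_S = np.diag([0.01, 0.01, 0.02])      # lidar noise covariance (assumed)
Sigma_R = np.diag([0.05, 0.05, 0.10])      # pose-estimate covariance (assumed)
J_S = np.eye(3)                            # Jacobian of the lidar measurement
J_R = np.array([[1.0, 0.0, 0.5],           # Jacobian of the vehicle motion
                [0.0, 1.0, 0.2],
                [0.0, 0.0, 1.0]])

var_hat = J_S @ Sigma_S @ J_S.T + J_R @ Sigma_R @ J_R.T
print(np.round(var_hat, 3))
```

Since both covariance matrices are symmetric positive semi-definite, the propagated covariance `var_hat` is as well, so its diagonal can be read off as the per-axis variance inflation caused by the motion.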
The solved height measurement and variance are used as the prior for the height measurement and variance of the next frame's grid, i.e., as z_{t−1} and σ²_{t−1} in the process of S42, and the height measurements and variances in the grid are then updated by the method described above.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" means that the specific features, structures, materials, or characteristics described in connection with the embodiment or example are included in at least one embodiment or example of the invention. In this specification, the schematic representations of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, those skilled in the art may combine the different embodiments or examples, and the features of the different embodiments or examples, described in this specification, provided they do not contradict each other.
While embodiments of the present invention have been shown and described above, it is to be understood that the above embodiments are exemplary and are not to be construed as limiting the present invention; those of ordinary skill in the art may make variations, modifications, substitutions, and alterations to the above embodiments within the scope of the present invention.
The above embodiments should not be construed as limiting the scope of the present invention. Any other changes and modifications made according to the technical idea of the present invention shall fall within the protection scope of the claims of the present invention.
Claims (10)
1. A vehicle front three-dimensional digital elevation map construction system, comprising:
the data acquisition module is used for acquiring three-dimensional point cloud data of a road surface in front of the vehicle;
the data processing module is used for removing outliers and down-sampling the three-dimensional point cloud data;
the vehicle pose estimation module is used for estimating vehicle pose information in real time according to the three-dimensional point cloud data which is subjected to outlier removal and down-sampling;
and the map construction module is used for constructing a three-dimensional digital elevation map in front of the vehicle according to the three-dimensional point cloud data which is subjected to outlier removal and downsampling and the vehicle pose information.
2. The vehicle front three-dimensional digital elevation map construction system of claim 1, wherein the data acquisition module is a lidar.
3. The vehicle front three-dimensional digital elevation map construction system of claim 2, wherein the data processing module comprises:
the point cloud processing unit is used for removing outliers and down-sampling the three-dimensional point cloud data;
and the point cloud extraction unit is used for extracting passing area point clouds and non-passing area point clouds from the three-dimensional point cloud data subjected to outlier removal and down-sampling.
4. The vehicle front three-dimensional digital elevation map construction system of claim 3, wherein the vehicle pose estimation module comprises:
the characteristic extraction unit is used for extracting the characteristics of the passing area point cloud and the non-passing area point cloud by adopting a local smoothness-based method to obtain plane characteristic points and edge characteristic points;
the characteristic point matching unit is used for carrying out characteristic matching on the plane characteristic points of the front frame and the rear frame and carrying out characteristic matching on the edge characteristic points of the front frame and the rear frame;
the optimization equation construction unit is used for constructing a plane feature point optimization equation based on the plane feature points completing the feature matching and constructing an edge feature point optimization equation based on the edge feature points completing the feature matching;
and the pose change acquisition unit is used for merging the plane feature point optimization equation and the edge feature point optimization equation, and performing optimization solution by using a Levenberg-Marquardt method to obtain the vehicle pose change relation between the front frame and the rear frame.
5. The vehicle front three-dimensional digital elevation map construction system of claim 4, wherein the map construction module comprises:
the coordinate system conversion unit is used for converting the three-dimensional point cloud data from a laser radar coordinate system to a vehicle coordinate system;
the point cloud mapping unit is used for mapping the three-dimensional point cloud data under the vehicle coordinate system into a two-dimensional grid formed by drawing in an X-Y plane of the vehicle coordinate system;
and the map construction unit is used for calculating the height measurement value of the two-dimensional grid according to a preset rule and updating the height measurement value of the two-dimensional grid based on the vehicle pose transformation relation between the front frame and the rear frame.
6. A vehicle front three-dimensional digital elevation map construction method implemented by the vehicle front three-dimensional digital elevation map construction system according to any one of claims 1 to 5, comprising the steps of:
S1: acquiring three-dimensional point cloud data of a road surface in front of a vehicle;
S2: removing outliers from and down-sampling the three-dimensional point cloud data, and extracting passing area point clouds and non-passing area point clouds;
S3: estimating vehicle pose information in real time according to the three-dimensional point cloud data after outlier removal and down-sampling;
S4: constructing a three-dimensional digital elevation map in front of the vehicle according to the three-dimensional point cloud data after outlier removal and down-sampling and the vehicle pose information.
7. The method for constructing a vehicle front three-dimensional digital elevation map according to claim 6, wherein said step S1 is implemented by scanning with a laser radar to obtain three-dimensional point cloud data of a road surface in front of the vehicle.
8. The method for constructing a vehicle front three-dimensional digital elevation map as claimed in claim 7, wherein step S3 specifically includes the following sub-steps:
S31: performing feature extraction on the passing area point cloud and the non-passing area point cloud by a local smoothness-based method to obtain plane feature points and edge feature points; wherein,
the local smoothness is evaluated by defining a parameter f, namely:

f = (1 / (|S|·‖X_{(k,i)}‖)) · ‖ Σ_{j∈S, j≠i} (X_{(k,i)} − X_{(k,j)}) ‖

where S represents the set of all points in each frame of the lidar, X_{(k,i)} represents point i of the k-th frame in the lidar coordinate system, and X_{(k,j)} represents point j of the k-th frame in the lidar coordinate system;
sorting the three-dimensional point cloud data after outlier removal and down-sampling by f value; in each scan line of the lidar, selecting the 10 points with the largest f values as edge feature points and the 25 points with the smallest f values as plane feature points; and removing edge feature points and plane feature points that do not satisfy the preset threshold conditions;
step 32: carrying out feature matching on the plane feature points of the front frame and the back frame, and carrying out feature matching on the edge feature points of the front frame and the back frame;
step 33: constructing a plane feature point optimization equation based on the plane feature points completing the feature matching and constructing an edge feature point optimization equation based on the edge feature points completing the feature matching;
the construction of the edge feature point optimization equation comprises the following steps:
step 331: after the pose of the edge feature point i in the kth frame is changed, searching an edge feature point m which is closest to the edge feature point i in the same scanning line of the (k + 1) th frame and an edge feature point n which is closest to the edge feature point i in an adjacent scanning line of the (k + 1) th frame;
step 332: forming a straight line l by the edge feature points m and n;
step 333: calculating the distance d_i from the edge feature point i to the straight line l;
Repeating steps 331 to 333 for all edge feature points constructs the edge feature point optimization equation d_C = Σ_{i=1}^{N} d_i, where N is the number of edge feature points;
the construction of the plane feature point optimization equation comprises the following steps:
step 331': after the pose of the plane feature point j in the kth frame is changed, searching plane feature points a and b which are closest and next closest to the plane feature point j in the same scanning line of the (k + 1) th frame and a plane feature point q which is closest to the plane feature point j in an adjacent scanning line of the (k + 1) th frame;
step 332': forming a plane e by using the plane characteristic points a, b and q;
step 333′: calculating the distance d_j from the plane feature point j to the plane e;
Repeating steps 331′ to 333′ for all plane feature points constructs the plane feature point optimization equation d_F = Σ_{j=1}^{M} d_j, where M is the number of plane feature points.
9. The method for constructing a vehicle front three-dimensional digital elevation map as claimed in claim 8, wherein step S4 specifically includes the following sub-steps:
S41: converting the three-dimensional point cloud data after outlier removal and down-sampling from the lidar coordinate system to the vehicle coordinate system;
S42: taking the projection of the vehicle centroid onto the ground as the origin, with the X axis pointing to the front of the vehicle, the Y axis pointing to the driver's left side, and the Z axis pointing upward through the vehicle centroid, dividing the X-Y plane into two-dimensional grids of a preset size, mapping the three-dimensional point cloud data after outlier removal and down-sampling into the two-dimensional grids by coordinate transformation, generating a new height measurement value in each cell of the two-dimensional grid, and approximating each height measurement by a Gaussian probability distribution N(p, σ̂²), where p is the height measurement value and σ̂² is the variance of the height measurement;

the update rule that constrains the height updates in each grid cell is:

z_t = (σ²_{t−1}·p_t + σ̂²_t·z_{t−1}) / (σ̂²_t + σ²_{t−1}) if d_M < c, and z_t = z_{t−1} otherwise;

the variance update rule is:

σ²_t = (σ̂²_t·σ²_{t−1}) / (σ̂²_t + σ²_{t−1}) if d_M < c, and σ²_t = σ²_{t−1} otherwise;

where c is a preset threshold, p_t is the height measurement of the grid cell at time t with variance σ̂²_t, z_t is the updated height of the grid cell at time t, z_{t−1} is the height of the grid cell at time t−1, σ²_t is the updated variance of the grid cell at time t, σ²_{t−1} is the variance of the grid cell at time t−1, and d_M is the Mahalanobis distance, defined as d_M = √((p_t − z_{t−1})² / σ²_{t−1});
S43: assuming that the vehicle moves at a constant speed, based on the vehicle pose transformation relation T between the current frame and the previous frame and the height measurement and variance values of the current frame's grid, solving the height measurement p and variance σ̂² of the corresponding position in the next frame's grid by rotating and translating the corresponding position in the current frame's grid.
10. The method for constructing a vehicle front three-dimensional digital elevation map according to claim 9, wherein the variance of the height measurement is updated according to the motion uncertainty as σ̂² = J_S·Σ_S·J_Sᵀ + J_R·Σ_R·J_Rᵀ, where Σ_S is the covariance matrix of the lidar noise model, Σ_R is the covariance matrix of the vehicle pose estimate, J_S is the Jacobian matrix of the lidar measurement, J_R is the Jacobian matrix of the vehicle motion, and the superscript T denotes the matrix transpose.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210581073.9A CN114924288A (en) | 2022-05-26 | 2022-05-26 | System and method for constructing vehicle front three-dimensional digital elevation map |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114924288A true CN114924288A (en) | 2022-08-19 |
Family
ID=82811130
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210581073.9A Pending CN114924288A (en) | 2022-05-26 | 2022-05-26 | System and method for constructing vehicle front three-dimensional digital elevation map |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114924288A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115656238A (en) * | 2022-10-17 | 2023-01-31 | 中国科学院高能物理研究所 | Micro-area XRF (X-ray fluorescence) elemental analysis and multi-dimensional imaging method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||