CN109099901B - Full-automatic road roller positioning method based on multi-source data fusion - Google Patents
- Publication number: CN109099901B
- Application number: CN201810670215.2A
- Authority
- CN
- China
- Prior art keywords
- road roller
- point
- point cloud
- laser radar
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/005—Navigation with correlation of navigation data from several sources, e.g. map or contour matching
- G01C21/10—Navigation by using measurements of speed or acceleration
- G01C21/12—Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Inertial navigation combined with non-inertial navigation instruments
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/86—Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Computer Networks & Wireless Communication (AREA)
- Electromagnetism (AREA)
- Multimedia (AREA)
- Optical Radar Systems And Details Thereof (AREA)
- Traffic Control Systems (AREA)
Abstract
The invention discloses a full-automatic road roller positioning method based on multi-source data fusion. Boundary markers are set at the boundary of the construction area; multiple synchronized cameras acquire images containing the boundary markers while a laser radar acquires point cloud data of the construction area; the images are matched with the laser point cloud data, the construction area markers are identified, and the construction fence area is calculated. The laser point cloud data and the IMU output are then fused to compute the position and attitude of the road roller in the construction area, positioning the unmanned road roller within it. By fusing visual data and IMU data through a machine vision positioning method, the method adapts to the special construction environments of road rollers and effectively avoids the signal loss that comes with relying on GPS positioning alone; its positioning accuracy and speed meet the positioning and path-planning requirements of an unmanned road roller in a construction environment, so the method can be applied effectively on road roller construction sites and solves the problem of poor road roller positioning accuracy.
Description
Technical Field
The invention relates to a full-automatic road roller positioning method based on multi-source data fusion.
Background
Engineering machinery often operates at disaster sites such as earthquakes, floods and tsunamis, and under severe conditions such as high temperature, extreme cold, high altitude and high radiation; construction efficiency is low and operators frequently risk their lives. Upgrading existing engineering machinery with unmanned technology improves construction quality, reduces personnel cost, lowers safety risks and provides important support for industrial upgrading. This applies in particular to the road roller, a piece of road equipment widely used in the fill-and-compact operations of large engineering projects such as highways, railways, airport runways, dams and stadiums; it can roll sandy, semi-cohesive and cohesive soils, stabilized roadbed soil and asphalt concrete pavement layers, so its construction demand is extremely broad and its operation comparatively difficult. Standardized road roller construction is also a problem that construction units and machinery manufacturers worldwide urgently need to solve, and the full-automatic road roller provides an effective guarantee for solving it.
The premise for autonomous construction by a full-automatic road roller is accurate knowledge of its own position. At present, engineering machinery is mainly positioned by GPS alone or by fusing GPS with an odometer. GPS equipment carries its own positioning error, and where the signal is blocked, for example in special working environments such as tunnels or dense urban areas, the GPS signal weakens rapidly or is lost entirely, so automated construction cannot reach the required operating precision.
Most existing unmanned road rollers suffer from low positioning precision, high cost and similar defects. Patent publication No. CN106127177A discloses an unmanned road roller that is positioned by GPS; it does not consider that the GPS signal cannot be received when the road roller operates in special environments, so the machine cannot work in environments such as tunnels and under bridges.
Machine vision is an important branch of computer science that integrates optical, mechanical, electronic and computer hardware and software technologies, and touches the fields of computing, image processing, pattern recognition, artificial intelligence, signal processing, optics and mechatronics. Visual navigation is the technique of obtaining the pose parameters of a carrier by processing the images acquired by a vision sensor. At present, visual navigation technology is mainly applied in four areas: mobile robot competitions, autonomous navigation of industrial AGVs, intelligent vehicles, and national defense research. Patent publication No. CN104835173A proposes a positioning method based on machine vision, but it realizes AGV positioning through a vehicle-mounted vision system relying on a camera alone, and its stability is insufficient.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and provides a full-automatic road roller positioning method based on multi-source data fusion.
The purpose of the invention is realized by the following technical scheme:
A full-automatic road roller positioning method based on multi-source data fusion is characterized by comprising the following steps: boundary markers are set on the boundary of the construction area to serve as marker points assisting the fused positioning of laser point cloud data and image data; multiple synchronized cameras acquire images containing the boundary markers while a laser radar acquires point cloud data of the construction area; the images are matched with the laser point cloud data, the construction area markers are identified, and the construction fence area is calculated; the laser point cloud data and the IMU output are fused to compute the position and attitude of the road roller in the construction area, positioning the unmanned road roller within the construction area.
Further, the full-automatic road roller positioning method based on multi-source data fusion is characterized in that a construction area boundary marker is arranged on a construction site, an industrial personal computer and an inertial navigation unit are installed on the road roller, cameras used for acquiring image information of the construction area of the road roller are installed on the periphery of the road roller, and a three-dimensional laser radar is arranged on the front side of the road roller.
Further, cameras are arranged on the front, rear, left and right sides of the road roller and, fitted with wide-angle lenses, cover the full 360-degree range around the machine; the cameras are calibrated with a checkerboard to obtain the camera intrinsic parameters and to correct lens distortion;
the three-dimensional laser radar scans targets within a 100-meter radius over 30 degrees vertically and 360 degrees horizontally, and the motion track of the road roller is obtained by continuously matching successive point cloud frames;
the construction area boundary marker is a traffic cone, and effective identification and distance measurement of the traffic cone are obtained by using a method of matching images with point cloud data, so that the construction area boundary is surveyed;
the inertial navigation unit acquires the speed and position information of the road roller, and the laser radar information and the inertial navigation information are fused by using the extended Kalman filter, so that the synchronous composition and positioning of the unmanned road roller are realized.
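The checkerboard calibration mentioned above yields the camera intrinsics and distortion coefficients; the correction step itself can be sketched with the standard radial model. The coefficients k1, k2 and the fixed-point inversion below are illustrative assumptions, not the patent's actual routine:

```python
import numpy as np

def distort(xn, yn, k1, k2):
    """Apply the standard radial distortion model to normalized coordinates."""
    r2 = xn ** 2 + yn ** 2
    scale = 1 + k1 * r2 + k2 * r2 ** 2
    return xn * scale, yn * scale

def undistort(xd, yd, k1, k2, iters=20):
    """Invert the radial model by fixed-point iteration (a common approach)."""
    xn, yn = xd, yd
    for _ in range(iters):
        r2 = xn ** 2 + yn ** 2
        scale = 1 + k1 * r2 + k2 * r2 ** 2
        xn, yn = xd / scale, yd / scale
    return xn, yn

# Example: distort a normalized point, then recover it
k1, k2 = -0.12, 0.03            # illustrative coefficients, not calibrated values
xd, yd = distort(0.4, -0.2, k1, k2)
xu, yu = undistort(xd, yd, k1, k2)
```

In practice the coefficients come from the checkerboard calibration; the fixed-point loop converges quickly because the distortion is a small perturbation near the image center.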
Further, in the full-automatic road roller positioning method based on multi-source data fusion, the traffic cone is a triangular cone and is used as a boundary marker of a construction area, three-dimensional laser point clouds and images of the triangular cone at two viewpoints are respectively acquired through a three-dimensional laser radar and a camera, and point cloud semantics of the triangular cone marker are acquired by utilizing a calibration relation between established laser point cloud data and image data and are used as a vehicle positioning reference basis.
Further, the full-automatic road roller positioning method based on multi-source data fusion is characterized in that three-dimensional point cloud data of the scene are obtained with the three-dimensional laser radar and a three-dimensional SLAM map is built with the LOAM method, in which feature points are extracted and matched and the coordinate transformation is then calculated; first, the point cloud and IMU data are preprocessed to extract feature points: the points of one scan are classified by a curvature value C, computed as

C = \frac{1}{|S| \cdot \|X_{(k,i)}^{L}\|} \left\| \sum_{j \in S,\, j \neq i} \left( X_{(k,i)}^{L} - X_{(k,j)}^{L} \right) \right\|

where {L} is the three-dimensional coordinate system of the laser radar, with origin at the geometric center of the laser radar, the x axis pointing left, the y axis pointing up and the z axis pointing forward; P_k denotes the point cloud perceived during scan k; the coordinates of a point i, i \in P_k, in {L_k} are written X_{(k,i)}^{L}; and S is the set of consecutive points returned by the laser scanner during the same scan;
sorting the points in the scanning according to the C value and selecting the characteristic points, wherein the maximum value point of the C is selected as an edge point, and the minimum value point of the C is selected as a plane point; dividing one-time scanning into 4 independent sub-areas to enable feature points to be uniformly distributed in an environment, providing 2 edge points and 4 plane points at most for each sub-area, then carrying out registration between two adjacent frames of point cloud data, namely completing correlation of point cloud data at the time t and the time t +1, and estimating relative motion relation of a radar; the point cloud registration process comprises the following steps: for the characteristic line, finding out a nearest point j of the point i in the point cloud at the time t by using a KD tree, and finding out a next nearest point l around the j, so that (j, l) is called as the correspondence of the point i in the point cloud at the time t; for the feature surface, similar to the feature line, first find the closest point j, find l around j, find m around j, and call (j, l, m) the correspondence of point i in the point cloud at time t;
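A rough sketch of the curvature computation and edge-point selection described above; the synthetic corner scan and the neighborhood window size are illustrative, not taken from the patent:

```python
import numpy as np

def curvature(points, window=5):
    """LOAM-style smoothness: norm of the summed difference vectors to the
    2*window neighbors, normalized by the point's range. High values mark
    edge points, low values mark planar points."""
    n = len(points)
    c = np.full(n, np.nan)
    for i in range(window, n - window):
        diffs = sum(points[i] - points[j]
                    for j in range(i - window, i + window + 1) if j != i)
        c[i] = np.linalg.norm(diffs) / (2 * window * np.linalg.norm(points[i]))
    return c

# Synthetic scan line: a flat wall meeting a slanted wall; the corner sits
# at index 20 and should receive the largest curvature value.
pts = np.array([[x, 1.0 if x < 0 else 1.0 + x, 0.0]
                for x in np.linspace(-1, 1, 41)])
c = curvature(pts)
edge_idx = int(np.nanargmax(c))   # edge-point candidate (the corner)
```

On the flat segments the difference vectors cancel, so the curvature is near zero and those points become planar candidates; only at the corner does the sum stay large.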
After the registration points are found, the constraint relationships between point clouds at different moments are obtained by computing the distance from each feature point to its correspondence. Starting with the edge points: if (j, l) is the corresponding edge line, the point-to-line distance is calculated as

d_{\mathcal{E}} = \frac{\left| \left( \tilde{X}_{(k+1,i)}^{L} - \bar{X}_{(k,j)}^{L} \right) \times \left( \tilde{X}_{(k+1,i)}^{L} - \bar{X}_{(k,l)}^{L} \right) \right|}{\left| \bar{X}_{(k,j)}^{L} - \bar{X}_{(k,l)}^{L} \right|}

where \tilde{X}_{(k+1,i)}^{L} is the coordinate of point i in {L} and \bar{X}_{(k,j)}^{L}, \bar{X}_{(k,l)}^{L} are the coordinates of the corresponding points j and l from the previous moment. If (j, l, m) is the corresponding plane, the point-to-plane distance is calculated as

d_{\mathcal{H}} = \frac{\left| \left( \tilde{X}_{(k+1,i)}^{L} - \bar{X}_{(k,j)}^{L} \right) \cdot \left( \left( \bar{X}_{(k,j)}^{L} - \bar{X}_{(k,l)}^{L} \right) \times \left( \bar{X}_{(k,j)}^{L} - \bar{X}_{(k,m)}^{L} \right) \right) \right|}{\left| \left( \bar{X}_{(k,j)}^{L} - \bar{X}_{(k,l)}^{L} \right) \times \left( \bar{X}_{(k,j)}^{L} - \bar{X}_{(k,m)}^{L} \right) \right|}

For the pose transform T to be estimated in the above formulas, the Jacobian matrix is obtained by computing partial derivatives, and motion estimation is solved with the L-M (Levenberg-Marquardt) algorithm applied to the stacked residual equation

f(T) = d

where T = (t_x, t_y, t_z, \theta_x, \theta_y, \theta_z)^{T} contains the rigid motion of the lidar in 6 degrees of freedom, t_x, t_y and t_z are the translations along the x, y and z axes of the coordinate system {L}, and \theta_x, \theta_y and \theta_z are the rotation angles, following the right-hand rule; each row of f corresponds to one feature point and d contains the corresponding distances; J = \partial f / \partial T is the Jacobian matrix of f. Then, by the nonlinear iteration

T \leftarrow T - \left( J^{T} J + \lambda \, \mathrm{diag}(J^{T} J) \right)^{-1} J^{T} d

d is driven toward zero, which yields the motion estimate.
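The point-to-line and point-to-plane residuals follow directly from their geometric definitions; the sample points below are illustrative:

```python
import numpy as np

def point_to_line(p, j, l):
    """Distance from p to the line through j and l (edge-line residual):
    |(p-j) x (p-l)| / |j-l|."""
    return np.linalg.norm(np.cross(p - j, p - l)) / np.linalg.norm(j - l)

def point_to_plane(p, j, l, m):
    """Distance from p to the plane through j, l, m (planar-patch residual):
    |(p-j) . n| / |n| with n the plane normal."""
    n = np.cross(l - j, m - j)
    return abs(np.dot(p - j, n)) / np.linalg.norm(n)

p = np.array([0.0, 0.0, 1.0])
j = np.array([0.0, 0.0, 0.0])
l = np.array([1.0, 0.0, 0.0])
m = np.array([0.0, 1.0, 0.0])
d_edge = point_to_line(p, j, l)       # p sits one unit above the x-axis
d_plane = point_to_plane(p, j, l, m)  # p sits one unit above the xy-plane
```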
Further, the full-automatic road roller positioning method based on multi-source data fusion is characterized in that raw data are obtained with the vehicle-mounted three-dimensional laser radar and a candidate landmark set is obtained through data filtering and data clustering; the iterative closest point (ICP) algorithm is then used to find the correspondence between these landmarks and the landmarks in the map, the position offset T and the angle offset r between the two point sets are calculated from the correspondences, and the pose of the laser radar is computed from these offsets. Assume the current pose of the laser radar in the global coordinate system is represented by the state variable (x_L, y_L, \phi_L)^{T}, where the first two terms are the position of the laser radar in the global coordinate system and the third term \phi_L indicates its heading; the observation equation is

(x_{L,k}, y_{L,k}, \phi_{L,k})^{T} = (x_k, y_k, \phi_k)^{T} + v

where (x_k, y_k, \phi_k) is the system state variable, (x_{L,k}, y_{L,k}, \phi_{L,k}) is the system observation variable obtained by composing the previous laser pose with the offsets T and r, and v is the observation noise with error matrix R.
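One way the ICP offsets (T, r) can be composed onto the previous lidar pose is sketched below; this is an illustrative planar composition, since the patent does not spell out the exact update:

```python
import math

def update_pose(x, y, phi, T, r):
    """Compose the previous lidar pose with the ICP offsets: translation
    T = (Tx, Ty) expressed in the previous pose's frame, rotation r."""
    Tx, Ty = T
    x_new = x + Tx * math.cos(phi) - Ty * math.sin(phi)
    y_new = y + Tx * math.sin(phi) + Ty * math.cos(phi)
    return x_new, y_new, phi + r

# Drive 1 m forward while heading along +y (phi = pi/2), then turn 0.1 rad
x, y, phi = update_pose(0.0, 0.0, math.pi / 2, (1.0, 0.0), 0.1)
```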
Further, according to the full-automatic road roller positioning method based on multi-source data fusion, during travel of the road roller the limited scanning range of the laser radar may leave no distinct landmark features in view, and the pose of the road roller then cannot be estimated from the landmarks; therefore, to keep the pose estimate continuous, the pose of the road roller is tracked by fusing the odometer and laser radar data with extended Kalman filtering. The increments of position and heading are computed from the odometer data and used as the input quantity u = (\Delta S, \Delta\phi)^{T}, and the vehicle system state equation is established as follows:

\begin{pmatrix} x_{k+1} \\ y_{k+1} \\ \phi_{k+1} \end{pmatrix} = \begin{pmatrix} x_k + \Delta S \cos(\phi_k + \Delta\phi/2) \\ y_k + \Delta S \sin(\phi_k + \Delta\phi/2) \\ \phi_k + \Delta\phi \end{pmatrix} + w

where w is the process white noise with error matrix Q.
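A minimal sketch of the dead-reckoning prediction driven by u = (ΔS, Δφ); the midpoint-heading discretization is an assumption, as the patent does not spell out its state equation:

```python
import math

def predict(x, y, phi, dS, dphi):
    """Dead-reckoning prediction: advance by arc length dS while turning by
    dphi, evaluating the heading at the midpoint of the turn (a common
    discretization of unicycle motion)."""
    x_new = x + dS * math.cos(phi + dphi / 2)
    y_new = y + dS * math.sin(phi + dphi / 2)
    return x_new, y_new, phi + dphi

# Straight ahead along the x axis: 2 m forward, no turn
x, y, phi = predict(0.0, 0.0, 0.0, 2.0, 0.0)
```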
Further, according to the full-automatic road roller positioning method based on multi-source data fusion, the cameras are industrial cameras with a large sensor target surface.
Further, in the full-automatic road roller positioning method based on multi-source data fusion, the three-dimensional laser radar is arranged on a rack above a steel wheel on the front side of the road roller.
Further, in the full-automatic road roller positioning method based on multi-source data fusion, the three-dimensional laser radar is installed at an inclined angle.
Compared with the prior art, the invention has obvious advantages and beneficial effects, and is embodied in the following aspects:
according to the method, the visual data and the IMU data are fused through a machine vision positioning method, the method can adapt to the special construction environment of the road roller, the problem of signal loss caused by the fact that GPS positioning is only relied on is effectively solved, the positioning precision and the speed can meet the positioning and path planning requirements of the unmanned road roller in the construction environment, and the method can be effectively applied to the construction site of the road roller; the invention effectively solves the problem of poor positioning precision of the road roller, reduces the workload of constructors and improves the labor productivity.
Drawings
FIG. 1 is a structural schematic diagram of the invention.
The reference numerals in the figure have the following meanings: 1: road roller; 2: industrial personal computer; 3: camera; 4: three-dimensional laser radar; 5: inertial navigation unit.
Detailed Description
In order to more clearly understand the technical features, objects, and effects of the present invention, specific embodiments will now be described in detail.
The invention relates to a full-automatic road roller positioning method based on multi-source data fusion. Boundary markers are set at the boundary of the construction area; multiple synchronized cameras acquire images containing the boundary markers while a laser radar acquires point cloud data of the construction area; the images are matched with the laser point cloud data, the construction area markers are identified, and the construction fence area is calculated. The laser point cloud data and the IMU output are then fused to compute the position and attitude of the road roller in the construction area, positioning the unmanned road roller within the construction area.
As shown in fig. 1, an industrial personal computer 2 is mounted in the cab of the road roller 1, an inertial navigation unit 5 is fixed at the industrial personal computer 2, cameras 3 for acquiring image information of the construction area are installed around the top of the cab, and a three-dimensional laser radar 4 is arranged at the front of the road roller 1; construction area boundary markers are set out on the construction site. The industrial personal computer provides ample expansion interfaces for the sensor units and processes their data. The cameras, the three-dimensional laser radar, the inertial navigation unit and the industrial personal computer communicate by publishing and subscribing messages through Robot Operating System (ROS) nodes: in the ROS process, a node receives the messages of other nodes by subscribing to published topics, which completes the inter-node communication. The data processing flow is as follows:
The three-dimensional laser radar 4 is arranged on a rack above the front steel wheel of the road roller 1 and mounted at an inclined angle; it scans targets within a 100-meter radius over 30 degrees vertically and 360 degrees horizontally, and the motion track of the road roller is obtained by continuously matching successive point cloud frames. This enables accurate positioning of the road roller and overcomes problems such as GPS signal loss.
The construction area boundary marker is a traffic cone, and effective identification and distance measurement of the traffic cone are obtained by using a method of matching images with point cloud data, so that the construction area boundary is surveyed; and the accurate positioning of the unmanned road roller in a construction area is realized.
The inertial navigation unit 5 acquires the speed and position information of the road roller, and fuses the laser radar information and the inertial navigation information by using the extended Kalman filter to realize synchronous composition and positioning of the unmanned road roller. And the accurate positioning of the unmanned road roller in a construction area is realized.
When the method is implemented, the following aspects are included:
a) boundary marker detection
The method adopts a target detection approach based on deep learning, specifically a region-based convolutional neural network, i.e. target detection combining region proposals with a convolutional neural network. Considering the requirements of both running speed and accuracy, a Faster R-CNN model is trained under the Caffe framework; Faster R-CNN introduces the Region Proposal Network (RPN) so that region proposal, classification and regression share the convolutional features, which guarantees accuracy while improving throughput.
b) Camera and laser data fusion
The traffic cone is a triangular cone and is used as a boundary marker of a construction area, three-dimensional laser point clouds and images of the triangular cone under two viewpoints are respectively collected through a three-dimensional laser radar and a camera, and point cloud semantics of the triangular cone marker are obtained by utilizing a calibration relation between established laser point cloud data and image data and are used as a vehicle positioning reference basis.
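The calibration relation between the laser point cloud and the image data amounts to projecting lidar points into the image plane. A minimal sketch under an assumed intrinsic matrix K and an identity extrinsic transform; both are illustrative stand-ins, not the patent's calibration values:

```python
import numpy as np

def project(point_lidar, R, t, K):
    """Project a 3D lidar point into the image using the extrinsic
    calibration (R, t: lidar frame -> camera frame) and the camera
    intrinsic matrix K; returns pixel coordinates (u, v)."""
    p_cam = R @ point_lidar + t       # transform into the camera frame
    uvw = K @ p_cam                   # apply the pinhole model
    return uvw[:2] / uvw[2]           # perspective division

K = np.array([[500.0,   0.0, 320.0],   # illustrative focal lengths and
              [  0.0, 500.0, 240.0],   # principal point for a 640x480 image
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # illustrative: frames aligned
t = np.zeros(3)
uv = project(np.array([0.0, 0.0, 2.0]), R, t, K)  # point on the optical axis
```

A point on the optical axis projects onto the principal point, which gives a quick sanity check on the calibration chain; in the real system R and t come from the established lidar-camera calibration.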
c) LOAM positioning method based on boundary marker point cloud
Three-dimensional point cloud data of the scene are acquired with the three-dimensional laser radar and a three-dimensional SLAM map is built with the LOAM method, in which feature points are extracted and matched and the coordinate transformation is then calculated; first, the point cloud and IMU data are preprocessed to extract feature points: the points of one scan are classified by a curvature value C, computed as

C = \frac{1}{|S| \cdot \|X_{(k,i)}^{L}\|} \left\| \sum_{j \in S,\, j \neq i} \left( X_{(k,i)}^{L} - X_{(k,j)}^{L} \right) \right\|

where {L} is the three-dimensional coordinate system of the laser radar, with origin at the geometric center of the laser radar, the x axis pointing left, the y axis pointing up and the z axis pointing forward; P_k denotes the point cloud perceived during scan k; the coordinates of a point i, i \in P_k, in {L_k} are written X_{(k,i)}^{L}; and S is the set of consecutive points returned by the laser scanner during the same scan.
Sorting the points in scanning according to the C value and selecting the characteristic points, wherein the maximum value point of C is selected as an edge point, the minimum value point of C is selected as a plane point, one-time scanning is divided into 4 independent sub-areas to enable the characteristic points to be uniformly distributed in the environment, each sub-area provides 2 edge points and 4 plane points at most, then registration between two adjacent frames of point cloud data is carried out, namely the association of the point cloud data at the time t and the time t +1 is completed, and the relative motion relation of the radar is estimated; the point cloud registration process comprises the following steps: for the characteristic line, finding out a nearest point j of the point i in the point cloud at the time t by using a KD tree, and finding out a next nearest point l around the j, so that (j, l) is called as the correspondence of the point i in the point cloud at the time t; for the feature surface, similar to the feature line, first find the closest point j, find l around j, find m around j, and call (j, l, m) the correspondence of point i in the point cloud at time t;
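The correspondence search just described (nearest point j plus next-nearest point l) can be sketched as follows; a brute-force search stands in for the KD tree mentioned in the text, and the point sets are made-up examples:

```python
import numpy as np

def correspondences(query, cloud):
    """For each query point, return the index of the nearest point j and the
    second-nearest point l in the previous cloud (brute force in place of
    the KD tree used in practice)."""
    d = np.linalg.norm(cloud[None, :, :] - query[:, None, :], axis=2)
    order = np.argsort(d, axis=1)
    return order[:, 0], order[:, 1]

# Previous-frame points along a line; one query point near index 1
prev = np.array([[0.0, 0.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [5.0, 0.0, 0.0]])
q = np.array([[0.9, 0.0, 0.0]])
j, l = correspondences(q, prev)   # nearest: index 1; next-nearest: index 0
```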
After the registration points are found, the constraint relationships between point clouds at different times are obtained by computing the distance from each feature point to its correspondence. Starting with the edge points: if (j, l) is the corresponding edge line, the point-to-line distance can be calculated as

d_{\mathcal{E}} = \frac{\left| \left( \tilde{X}_{(k+1,i)}^{L} - \bar{X}_{(k,j)}^{L} \right) \times \left( \tilde{X}_{(k+1,i)}^{L} - \bar{X}_{(k,l)}^{L} \right) \right|}{\left| \bar{X}_{(k,j)}^{L} - \bar{X}_{(k,l)}^{L} \right|}

where \tilde{X}_{(k+1,i)}^{L} is the coordinate of point i in {L} and \bar{X}_{(k,j)}^{L}, \bar{X}_{(k,l)}^{L} are the coordinates of the corresponding points j and l at the previous time. If (j, l, m) is the corresponding plane, the point-to-plane distance is calculated as

d_{\mathcal{H}} = \frac{\left| \left( \tilde{X}_{(k+1,i)}^{L} - \bar{X}_{(k,j)}^{L} \right) \cdot \left( \left( \bar{X}_{(k,j)}^{L} - \bar{X}_{(k,l)}^{L} \right) \times \left( \bar{X}_{(k,j)}^{L} - \bar{X}_{(k,m)}^{L} \right) \right) \right|}{\left| \left( \bar{X}_{(k,j)}^{L} - \bar{X}_{(k,l)}^{L} \right) \times \left( \bar{X}_{(k,j)}^{L} - \bar{X}_{(k,m)}^{L} \right) \right|}

For the pose transform T to be estimated in the above formulas, the Jacobian matrix is obtained by computing partial derivatives, and motion estimation is solved with the L-M (Levenberg-Marquardt) algorithm applied to the stacked residual equation

f(T) = d

where T = (t_x, t_y, t_z, \theta_x, \theta_y, \theta_z)^{T} contains the rigid motion of the lidar in 6 degrees of freedom, t_x, t_y and t_z are the translations along the x, y and z axes of the coordinate system {L}, and \theta_x, \theta_y and \theta_z are the rotation angles, following the right-hand rule; each row of f corresponds to one feature point and d contains the corresponding distances; J = \partial f / \partial T is the Jacobian matrix of f. Then, by the nonlinear iteration

T \leftarrow T - \left( J^{T} J + \lambda \, \mathrm{diag}(J^{T} J) \right)^{-1} J^{T} d

d is driven toward zero, which yields the motion estimate.
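The nonlinear L-M iteration can be illustrated on a toy linear residual, where a single Gauss-Newton-like step already lands near the optimum; the matrix A, vector b and damping value are made-up illustrations, not from the patent:

```python
import numpy as np

def lm_step(T, f, J, lam):
    """One Levenberg-Marquardt update:
    T <- T - (J^T J + lam * diag(J^T J))^-1 J^T f(T)."""
    JtJ = J.T @ J
    H = JtJ + lam * np.diag(np.diag(JtJ))
    return T - np.linalg.solve(H, J.T @ f)

# Toy residual f(T) = A T - b, so the Jacobian is the constant matrix A
A = np.array([[2.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([2.0, 1.0, 2.0])
T = np.zeros(2)
for _ in range(50):
    T = lm_step(T, A @ T - b, A, lam=1e-3)   # converges to T = (1, 1)
```

The damping term lam interpolates between gradient descent (large lam, robust far from the optimum) and Gauss-Newton (small lam, fast near it); in the real problem f is the vector of point-to-line and point-to-plane distances.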
d) continuous estimation of road roller motion attitude
Raw data are obtained with the vehicle-mounted three-dimensional laser radar, and a candidate landmark set is obtained through data filtering and data clustering; the iterative closest point (ICP) algorithm is then used to find the correspondence between these landmarks and the landmarks in the map, the position offset T and the angle offset r between the two point sets are calculated from the correspondences, and the pose of the laser radar is computed from these offsets. Assume the current pose of the laser radar in the global coordinate system is represented by the state variable (x_L, y_L, \phi_L)^{T}, where the first two terms are the position of the laser radar in the global coordinate system and the third term \phi_L indicates its heading; the observation equation is

(x_{L,k}, y_{L,k}, \phi_{L,k})^{T} = (x_k, y_k, \phi_k)^{T} + v

where (x_k, y_k, \phi_k) is the system state variable, (x_{L,k}, y_{L,k}, \phi_{L,k}) is the system observation variable obtained by composing the previous laser pose with the offsets T and r, and v is the observation noise with error matrix R.
During travel of the road roller, the limited scanning range of the laser radar may leave no distinct landmark features in view, and the pose of the road roller then cannot be estimated from the landmarks; therefore, to keep the pose estimate continuous, the pose of the road roller is tracked by fusing the odometer and laser radar data with extended Kalman filtering. The increments of position and heading are computed from the odometer data and used as the input quantity u = (\Delta S, \Delta\phi)^{T}, and the vehicle system state equation is established as follows:

\begin{pmatrix} x_{k+1} \\ y_{k+1} \\ \phi_{k+1} \end{pmatrix} = \begin{pmatrix} x_k + \Delta S \cos(\phi_k + \Delta\phi/2) \\ y_k + \Delta S \sin(\phi_k + \Delta\phi/2) \\ \phi_k + \Delta\phi \end{pmatrix} + w

where w is the process white noise with error matrix Q.
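One predict/update cycle of the extended Kalman filter described above can be sketched as follows; the assumptions here (the lidar observes the pose directly, so H = I, a midpoint-heading motion model, and illustrative noise matrices) go beyond what the patent states:

```python
import numpy as np

def ekf_fuse(x, P, u, z, Q, R):
    """One EKF cycle for the (x, y, phi) pose: predict with the odometry
    increment u = (dS, dphi), then correct with the lidar pose observation
    z under the model z = state + v."""
    dS, dphi = u
    phi = x[2]
    # Predict with the dead-reckoning model
    x_pred = x + np.array([dS * np.cos(phi + dphi / 2),
                           dS * np.sin(phi + dphi / 2),
                           dphi])
    F = np.array([[1, 0, -dS * np.sin(phi + dphi / 2)],
                  [0, 1,  dS * np.cos(phi + dphi / 2)],
                  [0, 0,  1]])
    P_pred = F @ P @ F.T + Q
    # Update: Kalman gain with H = I
    K = P_pred @ np.linalg.inv(P_pred + R)
    x_new = x_pred + K @ (z - x_pred)
    P_new = (np.eye(3) - K) @ P_pred
    return x_new, P_new

x = np.zeros(3); P = np.eye(3) * 0.1        # initial pose and covariance
Q = np.eye(3) * 0.01; R = np.eye(3) * 0.01  # illustrative noise matrices
# Odometry says 1 m straight ahead; lidar observation agrees
x, P = ekf_fuse(x, P, (1.0, 0.0), np.array([1.0, 0.0, 0.0]), Q, R)
```

When the lidar observation agrees with the prediction, the innovation is zero and the fused pose equals the dead-reckoned one; when landmarks reappear after a gap, the gain K pulls the drifting odometry estimate back toward the lidar pose.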
In summary, the invention sets markers, namely traffic cones, around the construction area to serve as marker points assisting the fused positioning of laser point cloud data and image data; it identifies the construction area markers and moving targets with a machine-learning-based method, acquiring construction site image data, labeling the markers and moving targets, and training a Faster R-CNN model under the Caffe framework to recognize them; and it fuses the camera image data, the laser point cloud data and the IMU inertial data to solve for the position and attitude of the road roller in the construction area.
By fusing visual data and IMU data through a machine vision positioning method, the invention adapts to the special construction environment of the road roller and effectively avoids the signal loss caused by relying on GPS positioning alone; its positioning accuracy and speed satisfy the positioning and path-planning requirements of an unmanned road roller in a construction environment, and the method can be applied effectively on road roller construction sites. The invention effectively solves the problem of poor road roller positioning accuracy, reduces the workload of construction personnel, and improves labor productivity.
It should be noted that the above description covers only preferred embodiments of the invention and does not limit its scope; although the invention has been explained with reference to these embodiments, further embodiments may be devised without departing from its basic scope, which is determined by the claims that follow.
Claims (7)
1. A full-automatic road roller positioning method based on multi-source data fusion, characterized by comprising the following steps: setting boundary markers on the boundary of the construction area to serve as marker points assisting the fused positioning of laser point cloud data and image data; acquiring images containing the boundary markers with multiple synchronized cameras while acquiring point cloud data of the construction area with a laser radar; matching the images with the laser point cloud data, identifying the construction area markers, and calculating the construction fence area; and fusing the laser point cloud data with the IMU output to compute the position and attitude of the road roller in the construction area, thereby positioning the unmanned road roller within the construction area;
setting a construction area boundary marker on a construction site, mounting an industrial personal computer and an inertial navigation unit on a road roller, mounting cameras for acquiring image information of a construction area of the road roller around the road roller, and setting a three-dimensional laser radar on the front side of the road roller;
cameras are distributed on the front side, the rear side, the left side and the right side of the road roller, the cameras are matched with wide-angle lenses to cover the range of 360 degrees around the road roller, the cameras are calibrated by utilizing checkerboards, camera internal parameters are obtained, and distortion correction is carried out on the cameras;
the three-dimensional laser radar scans a target with a radius of 100 meters in a vertical direction of 30 degrees and a horizontal direction of 360 degrees, and marks of the motion track of the road roller are obtained in a continuous point cloud data matching mode;
the construction area boundary marker is a traffic cone; effective identification and ranging of the traffic cone are obtained by matching images with point cloud data, so that the construction area boundary is surveyed;
the inertial navigation unit acquires the speed and position of the road roller, and an extended Kalman filter fuses the laser radar information with the inertial navigation information to realize simultaneous mapping and positioning of the unmanned road roller;
acquiring three-dimensional point cloud data of the scene with the three-dimensional laser radar, realizing three-dimensional SLAM composition with the LOAM method, and calculating the coordinate transformation after feature-point matching in the LOAM method; firstly, the point cloud and IMU data are preprocessed to extract feature points: the points of one scan are classified by curvature value, the formula being as follows:

C = \frac{1}{|S| \cdot \left\| X^{L}_{(k,i)} \right\|} \left\| \sum_{j \in S,\, j \neq i} \left( X^{L}_{(k,i)} - X^{L}_{(k,j)} \right) \right\|

wherein {L} is the three-dimensional coordinate system of the laser radar, originating at the geometric center of the laser radar, with the x-axis pointing left, the y-axis pointing up, and the z-axis pointing front; P_k represents the point cloud perceived during scan k; the coordinates of a point i, i ∈ P_k, in {L_k} are denoted X^L_(k,i); S is the group of consecutive points returned by the laser scanner in the same scan;
the points in the scan are sorted by their C value and the feature points are selected: points with the maximum C are selected as edge points, and points with the minimum C as planar points; each scan is divided into 4 independent sub-regions so that the feature points are evenly distributed in the environment, each sub-region providing at most 2 edge points and 4 planar points; registration is then performed between two adjacent frames of point cloud data, i.e., the point clouds at times t and t+1 are associated and the relative motion of the radar is estimated; the point cloud registration process is as follows: for a feature line, the nearest point j to point i is found in the point cloud at time t using a KD-tree, and the next-nearest point l is found around j, so that (j, l) is called the correspondence of point i in the point cloud at time t; for a feature surface, similarly to the feature line, the closest point j is found first, then l is found around j and m is found around j, and (j, l, m) is called the correspondence of point i in the point cloud at time t;
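The curvature-based feature selection described above can be sketched as follows; the neighbourhood window size is an illustrative assumption (the patent fixes only the per-sub-region counts of 2 edge and 4 planar points):

```python
import numpy as np

def curvature(scan, half_window=5):
    """LOAM-style smoothness value C for each point of one scan line.

    scan: (N, 3) array of lidar points in {L}. Points with the largest C are
    treated as edge points, points with the smallest C as planar points."""
    n = len(scan)
    c = np.full(n, np.nan)                       # boundary points left undefined
    for i in range(half_window, n - half_window):
        nbrs = np.r_[scan[i - half_window:i], scan[i + 1:i + 1 + half_window]]
        diff = (scan[i] - nbrs).sum(axis=0)      # sum of (X_i - X_j) over S
        c[i] = np.linalg.norm(diff) / (len(nbrs) * np.linalg.norm(scan[i]))
    return c

def select_features(scan, n_edge=2, n_plane=4, half_window=5):
    """Pick edge/plane feature indices for one sub-region of a scan."""
    c = curvature(scan, half_window)
    order = np.argsort(c)                        # NaNs sort last
    valid = order[~np.isnan(c[order])]
    return valid[-n_edge:], valid[:n_plane]      # edges = max C, planes = min C
```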
after the registration points are found, the constraint relation between point clouds at different moments is obtained by calculating the distance from each feature point to its correspondence; starting from the edge points, if (j, l) is the corresponding edge line, the point-to-line distance is calculated as:

d_{E} = \frac{\left| \left( X^{L}_{i} - X^{L}_{j} \right) \times \left( X^{L}_{i} - X^{L}_{l} \right) \right|}{\left| X^{L}_{j} - X^{L}_{l} \right|}

wherein X^L_i is the coordinate of point i in {L}, and X^L_j, X^L_l are the coordinates of the corresponding points j and l at the previous moment; if (j, l, m) is the corresponding plane, the point-to-plane distance is calculated as:

d_{H} = \frac{\left| \left( X^{L}_{i} - X^{L}_{j} \right) \cdot \left( \left( X^{L}_{j} - X^{L}_{l} \right) \times \left( X^{L}_{j} - X^{L}_{m} \right) \right) \right|}{\left| \left( X^{L}_{j} - X^{L}_{l} \right) \times \left( X^{L}_{j} - X^{L}_{m} \right) \right|}
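The two feature distances can be computed directly from the correspondence points; a minimal sketch:

```python
import numpy as np

def point_to_line(i, j, l):
    """Distance from edge point i to the line through correspondence (j, l)."""
    i, j, l = map(np.asarray, (i, j, l))
    return np.linalg.norm(np.cross(i - j, i - l)) / np.linalg.norm(j - l)

def point_to_plane(i, j, l, m):
    """Distance from planar point i to the plane through correspondence (j, l, m)."""
    i, j, l, m = map(np.asarray, (i, j, l, m))
    n = np.cross(l - j, m - j)                 # plane normal (unnormalized)
    return abs(np.dot(i - j, n)) / np.linalg.norm(n)
```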
for the pose transform T^L to be estimated in the above formulas, the Jacobian matrix is obtained by calculating partial derivatives, and motion estimation is solved with the L-M algorithm:

f(T^L) = d

wherein T^L = [t_x, t_y, t_z, \theta_x, \theta_y, \theta_z]^T comprises the rigid motion of the laser radar in 6 degrees of freedom: t_x, t_y and t_z are the translations along the x, y and z axes of the coordinate system {L}, and \theta_x, \theta_y and \theta_z are the rotation angles following the right-hand rule; each row of f corresponds to one feature point, and d contains the corresponding distances, specifically point-to-line and point-to-plane; J = \partial f / \partial T^L is the Jacobian matrix of the function f; then, by nonlinear iteration, d is driven toward zero, which yields the update:

T^L \leftarrow T^L - \left( J^{T} J + \lambda \, \mathrm{diag}(J^{T} J) \right)^{-1} J^{T} d

wherein λ is the damping factor of the nonlinear iteration.
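The L-M iteration can be sketched generically; the toy residual in the test below stands in for the stacked feature distances d, and a numeric Jacobian replaces the analytic partial derivatives for brevity (both are illustrative simplifications):

```python
import numpy as np

def numeric_jacobian(f, T, eps=1e-6):
    """Forward-difference Jacobian J = df/dT of a vector-valued residual f."""
    d0 = f(T)
    J = np.zeros((len(d0), len(T)))
    for k in range(len(T)):
        Tp = T.copy()
        Tp[k] += eps
        J[:, k] = (f(Tp) - d0) / eps
    return J

def levenberg_marquardt(f, T0, lam=1e-3, iters=50):
    """Minimal damped Gauss-Newton (L-M) loop driving the residual f(T) toward zero."""
    T = np.asarray(T0, float)
    for _ in range(iters):
        d = f(T)
        J = numeric_jacobian(f, T)
        H = J.T @ J + lam * np.diag(np.diag(J.T @ J))   # damped normal equations
        step = np.linalg.solve(H, J.T @ d)
        T = T - step
        if np.linalg.norm(step) < 1e-10:
            break
    return T
```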
2. The full-automatic road roller positioning method based on multi-source data fusion according to claim 1, characterized in that: the traffic cone is a triangular cone serving as the construction-area boundary marker; three-dimensional laser point clouds and images of the triangular cone are collected from two viewpoints by the three-dimensional laser radar and the camera respectively, and the point-cloud semantics of the triangular cone marker are obtained using the established calibration relation between the laser point cloud data and the image data, serving as the reference basis for vehicle positioning.
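A hedged sketch of fusing the cone's image detection with the point cloud: lidar points are projected into the image using an assumed lidar-to-camera calibration (R_cl, t_cl) and intrinsics K (all illustrative, not values from the patent), and points falling inside the cone's 2-D detection box are kept as the cone's point-cloud semantics:

```python
import numpy as np

def project_to_image(points_L, R_cl, t_cl, K):
    """Project lidar points (frame {L}) into the image plane; returns pixel
    coordinates of the points in front of the camera plus the validity mask."""
    pc = points_L @ R_cl.T + t_cl              # transform into the camera frame
    in_front = pc[:, 2] > 0
    uvw = pc[in_front] @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]              # perspective division
    return uv, in_front

def cone_points(points_L, bbox, R_cl, t_cl, K):
    """Return lidar points whose projection falls inside the cone's 2-D
    detection box bbox = (u_min, v_min, u_max, v_max)."""
    uv, mask = project_to_image(points_L, R_cl, t_cl, K)
    u0, v0, u1, v1 = bbox
    inside = (uv[:, 0] >= u0) & (uv[:, 0] <= u1) & (uv[:, 1] >= v0) & (uv[:, 1] <= v1)
    return points_L[mask][inside]
```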
3. The full-automatic road roller positioning method based on multi-source data fusion according to claim 1, characterized in that: raw data are obtained with the vehicle-mounted three-dimensional laser radar, and a candidate landmark set is obtained through data filtering and data clustering; the iterative closest point algorithm is then used to find the correspondence between the observed landmarks and the landmarks in the map, the offset is calculated from the obtained correspondence, and the pose of the laser radar is computed; the pose of the current laser radar in the global coordinate system is represented as a state variable (x_L, y_L, φ_L), where the first two terms are the position of the laser radar in the global coordinate system and the third term φ_L indicates the heading direction of the laser radar;
wherein (x_k, y_k, φ_k) is the system state variable, (x_{L,k}, y_{L,k}, φ_{L,k}) is the system observation variable, T is the position offset, R is the angle offset, and v is the observation noise with error matrix R.
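The landmark-matching pose estimate of claim 3 can be sketched as an ICP-style loop; brute-force nearest neighbour stands in for the KD-tree here, and the SVD (Kabsch) solve for the 2-D rigid transform is an assumed implementation detail not spelled out in the patent:

```python
import numpy as np

def estimate_pose_2d(landmarks, map_landmarks, iters=10):
    """ICP-style pose (x_L, y_L, phi_L): associate observed landmarks with map
    landmarks by nearest neighbour, solve the rigid 2-D transform by SVD, iterate."""
    obs = np.asarray(landmarks, float)
    mmap = np.asarray(map_landmarks, float)
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        moved = obs @ R.T + t
        d2 = ((moved[:, None, :] - mmap[None, :, :]) ** 2).sum(-1)
        tgt = mmap[d2.argmin(axis=1)]              # nearest map landmark per point
        mu_p, mu_t = moved.mean(0), tgt.mean(0)
        U, _, Vt = np.linalg.svd((moved - mu_p).T @ (tgt - mu_t))
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:                  # enforce a proper rotation
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        t = dR @ t + (mu_t - dR @ mu_p)            # compose incremental transform
        R = dR @ R
    return t[0], t[1], np.arctan2(R[1, 0], R[0, 0])
```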
4. The full-automatic road roller positioning method based on multi-source data fusion according to claim 3, characterized in that: during operation of the road roller, due to the limited scanning range of the laser radar, there may be no distinct landmark features in the field of view, in which case the pose of the road roller cannot be estimated from landmarks; therefore, the pose of the road roller is tracked to maintain continuity of the pose estimate: the odometer and laser radar data are fused by extended Kalman filtering, and the increments of the road roller's position and heading are calculated from the odometer data as the input quantity u = (ΔS, Δφ)^T; the vehicle system state equation is established as follows:
where w is the process white noise with error matrix Q.
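A minimal sketch of the extended Kalman filter of claim 4, assuming a midpoint odometry motion model and an identity observation matrix for the lidar pose fix (both assumptions, since the patent does not spell out the state equation):

```python
import numpy as np

def ekf_predict(x, P, u, Q):
    """Dead-reckoning prediction with odometry increment u = (dS, dphi)."""
    dS, dphi = u
    xk, yk, phi = x
    x_pred = np.array([xk + dS * np.cos(phi + dphi / 2),
                       yk + dS * np.sin(phi + dphi / 2),
                       phi + dphi])
    F = np.array([[1.0, 0.0, -dS * np.sin(phi + dphi / 2)],   # Jacobian of the model
                  [0.0, 1.0,  dS * np.cos(phi + dphi / 2)],
                  [0.0, 0.0,  1.0]])
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, R):
    """Correct with a lidar pose observation z = (x_L, y_L, phi_L); H = I."""
    H = np.eye(3)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(3) - K @ H) @ P
    return x_new, P_new
```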
5. The full-automatic road roller positioning method based on multi-source data fusion according to claim 1, characterized in that: the camera is a large target surface industrial camera.
6. The full-automatic road roller positioning method based on multi-source data fusion according to claim 1, characterized in that: the three-dimensional laser radar is arranged on a rack above a steel wheel on the front side of the road roller.
7. The full-automatic road roller positioning method based on multi-source data fusion according to claim 1, characterized in that: the three-dimensional laser radar is installed at an inclined angle.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810670215.2A CN109099901B (en) | 2018-06-26 | 2018-06-26 | Full-automatic road roller positioning method based on multi-source data fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109099901A CN109099901A (en) | 2018-12-28 |
CN109099901B true CN109099901B (en) | 2021-09-24 |
Family
ID=64845012
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
2019-04-04 | TA01 | Transfer of patent application right | Address after: Room 525, Building 333 Xingpu Road, Suzhou Industrial Park, Jiangsu Province; Applicant after: Zhongke Weiyi (Suzhou) Intelligent Technology Co., Ltd. Address before: 215021 Linquan Street 399, Suzhou Industrial Park, Jiangsu Province; Applicant before: Suzhou Road Agent Intelligent Technology Co., Ltd.
| GR01 | Patent grant |