CN116399324A - Mapping method and device, controller, and unmanned vehicle

Info

Publication number: CN116399324A
Authority: CN (China)
Prior art keywords: point cloud, pose, unmanned vehicle, IMU, laser radar
Prior art date: 2023-03-24
Legal status (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed): Pending
Application number: CN202310357308.0A
Other languages: Chinese (zh)
Inventors: 刘勇让, 刘明, 王鲁佳
Current Assignee (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list): Shenzhen Yiqing Innovation Technology Co., Ltd.; Qingshuiwan Shenzhen Automatic Driving Intelligence Research Center LP
Original Assignee: Shenzhen Yiqing Innovation Technology Co., Ltd.; Qingshuiwan Shenzhen Automatic Driving Intelligence Research Center LP
Priority date (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed): 2023-03-24
Filing date: 2023-03-24
Publication date: 2023-07-07
Application filed by Shenzhen Yiqing Innovation Technology Co., Ltd. and Qingshuiwan Shenzhen Automatic Driving Intelligence Research Center LP
Legal status: Pending

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3807 Creation or updating of map data characterised by the type of data
    • G01C21/3815 Road data
    • G01C21/3833 Creation or updating of map data characterised by the source of data
    • G01C21/3841 Data obtained from two or more sources, e.g. probe vehicles
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Abstract

The application relates to a mapping method and device, a controller, and an unmanned vehicle, wherein the method comprises the following steps: while the vehicle is running, acquiring the raw point cloud sampled by a laser radar, pose data collected by an IMU, and position data measured by a GPS; removing motion distortion from the raw point cloud according to the pose data; obtaining pose estimates of the laser radar by matching the undistorted point clouds; integrating the IMU measurements between pose estimates to obtain the first relative pose between two adjacent frames; inputting the position data, the pose estimates, and the first relative pose into a factor graph for optimization to obtain an initially optimized point cloud map; performing loop-closure detection on the current state of the vehicle using the initially optimized point cloud map and preset calibration boards; obtaining a second relative pose from the detection result; and inputting the second relative pose into the factor graph for optimization to obtain the final optimized point cloud map. By placing calibration boards with strong, recognizable features for loop-closure feature matching, loop detection is made more reliable, a high-precision point cloud map can be established, and the vehicle can plan its paths better.

Description

Mapping method and device, controller, and unmanned vehicle
Technical Field
The application relates to the technical field of computer vision, and in particular to a mapping method and device, a controller, and an unmanned vehicle.
Background
With the development of computer technology, high-precision point cloud maps are used ever more widely, and the precision demanded of them keeps rising. Because an unmanned vehicle travels fast and its surroundings are complex, existing mapping methods achieve only limited precision. Moreover, over long driving distances the accumulated error grows large and the final point cloud map drifts excessively, so unmanned positioning and perception based on that map are not accurate enough, creating hidden safety hazards.
Disclosure of Invention
The technical problem mainly addressed by the embodiments of the present application is how to improve existing high-precision mapping schemes so as to increase the safety of unmanned vehicles.
In a first aspect, an embodiment of the present application provides a mapping method applied to an unmanned vehicle on which a laser radar, an IMU, and a GPS are disposed, the method comprising: while the unmanned vehicle is running, acquiring the raw point cloud sampled by the laser radar, the pose data collected by the IMU, and the position data measured by the GPS; performing de-distortion processing on the raw point cloud according to the pose data to obtain an undistorted point cloud; matching the undistorted point clouds to obtain pose estimates of the laser radar; integrating the motion information of the IMU between the lidar pose estimates to obtain a first relative pose between two adjacent frames; inputting the GPS-measured position data, the lidar pose estimates, and the first relative pose into a factor graph for optimization to obtain an initially optimized point cloud map; performing loop-closure detection on the current state of the unmanned vehicle based on the initially optimized point cloud map and preset calibration boards; obtaining a second relative pose between two adjacent frames according to the loop detection result; and inputting the second relative pose into the factor graph for optimization to obtain a final optimized point cloud map.
In some embodiments, performing the de-distortion processing on the raw point cloud according to the pose data to obtain an undistorted point cloud comprises: determining a datum point and distortion points from the raw point cloud; motion-compensating the points in the raw point cloud using the pose data to obtain interpolated pose data for each point; calculating the spatial pose relation between each distortion point and the datum point from the interpolated pose data between the distortion point's timestamp and the datum point's timestamp; and converting the distortion points into the coordinate system of the datum point according to the spatial pose relation to obtain the undistorted point cloud.
In some embodiments, inputting the GPS-measured position data, the lidar pose estimates, and the first relative pose into the factor graph for optimization to obtain an initially optimized point cloud map comprises: based on the lidar pose estimates, acquiring the lidar point cloud at the current radar sampling time and all IMU data at each IMU sampling time from the previous radar sampling time to the current radar sampling time; calculating a predicted current state of the unmanned vehicle at the current radar sampling time from the IMU data and the state at the previous radar sampling time; acquiring the GPS-measured position data corresponding to the current radar sampling time; correcting the predicted current state with the position data to obtain a current state estimate; and back-propagating all the IMU data according to the current state estimate and the first relative pose to correct the lidar pose estimates and obtain the initially optimized point cloud map.
In some embodiments, when a preset number of calibration boards are placed on a preset road section along the unmanned vehicle's route, performing loop-closure detection on the current state of the unmanned vehicle based on the initially optimized point cloud map and the preset calibration boards comprises: when the unmanned vehicle travels onto the preset road section, obtaining the current index of the unmanned vehicle according to the preset number of calibration boards; retrieving the index recorded the last time the unmanned vehicle in its current state traveled onto the preset road section; and comparing the current index with the last recorded index: when the contents of the two recorded indexes are the same or their error is within a preset range, determining that the current state of the unmanned vehicle is closed-loop travel; otherwise, determining that the current state of the unmanned vehicle is non-closed-loop travel.
In some embodiments, obtaining the current index of the unmanned vehicle according to the preset number of calibration boards when the vehicle travels onto the preset road section comprises: receiving the calibration-board detection range and a detection-frequency instruction when the vehicle travels onto the preset road section; filtering the point cloud in the initially optimized point cloud map according to the detection range and the detection-frequency instruction to obtain the processed point cloud within the calibration boards' detection range; extracting the points whose intensity exceeds a preset threshold according to the point cloud intensity information; performing Hough line detection on the extracted points and selecting three straight lines that satisfy preset conditions; and selecting one point on each of the three lines, testing whether they satisfy the condition for forming a triangle, and, if so, recording the index of the vehicle's current radar odometry.
In some embodiments, obtaining the second relative pose between two adjacent frames according to the loop detection result specifically comprises: when the loop detection result determines that the current state of the unmanned vehicle is closed-loop travel, recording the second relative pose between the two adjacent frames of point clouds.
In a second aspect, an embodiment of the present application provides a mapping device applied to an unmanned vehicle on which a laser radar, an IMU, and a GPS are disposed, the device comprising: a parameter acquisition module, configured to acquire, while the unmanned vehicle is running, the raw point cloud sampled by the laser radar, the pose data collected by the IMU, and the position data measured by the GPS; a distortion processing module, configured to perform de-distortion processing on the raw point cloud according to the pose data to obtain an undistorted point cloud; a pose estimation acquisition module, configured to match the undistorted point clouds to obtain pose estimates of the laser radar; a first relative pose acquisition module, configured to integrate the motion information of the IMU between the lidar pose estimates to obtain a first relative pose between two adjacent frames; an initially optimized point cloud map acquisition module, configured to input the GPS-measured position data, the lidar pose estimates, and the first relative pose into a factor graph for optimization to obtain an initially optimized point cloud map; a loop detection module, configured to perform loop-closure detection on the current state of the unmanned vehicle based on the initially optimized point cloud map and preset calibration boards; a second relative pose acquisition module, configured to obtain a second relative pose between two adjacent frames according to the loop detection result; and a final optimized point cloud map acquisition module, configured to input the second relative pose into the factor graph for optimization to obtain a final optimized point cloud map.
In a third aspect, embodiments of the present application provide a controller, including: at least one processor; a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the mapping method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide an unmanned vehicle, comprising: a laser radar, an IMU, a GPS, and the controller described in the third aspect.
In a fifth aspect, embodiments of the present application provide a non-volatile computer-readable storage medium storing computer-executable instructions that, when executed by a controller, cause the controller to perform the mapping method described in the first aspect.
The beneficial effects of the embodiments of the present application are as follows. Unlike the related art, the mapping method and device, controller, and unmanned vehicle provided by the present application are applied to an unmanned vehicle on which a laser radar, an IMU, and a GPS are disposed, the mapping method comprising: while the unmanned vehicle is running, acquiring the raw point cloud sampled by the laser radar, the pose data collected by the IMU, and the position data measured by the GPS; performing de-distortion processing on the raw point cloud according to the pose data to obtain an undistorted point cloud; matching the undistorted point clouds to obtain pose estimates of the laser radar; integrating the motion information of the IMU between the lidar pose estimates to obtain a first relative pose between two adjacent frames; inputting the GPS-measured position data, the lidar pose estimates, and the first relative pose into a factor graph for optimization to obtain an initially optimized point cloud map; performing loop-closure detection on the current state of the unmanned vehicle based on the initially optimized point cloud map and preset calibration boards; obtaining a second relative pose between two adjacent frames according to the loop detection result; and inputting the second relative pose into the factor graph for optimization to obtain a final optimized point cloud map. The embodiments address the prior-art problems that positioning and perception based on a point cloud map are not accurate enough and that loop detection is error-prone: placing calibration boards with strong features for loop-closure feature matching raises the success rate of loop detection, markedly reduces errors in the mapping process, helps build a high-precision point cloud map, and helps the unmanned vehicle localize and plan paths better.
Drawings
One or more embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements, and in which the figures of the drawings are not to scale, unless expressly stated otherwise.
FIG. 1 is a schematic flow chart of a mapping method according to an embodiment of the present application;
FIG. 2 is a schematic view of a calibration-board placement scenario provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of adjusting calibration-board parameters according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a calibration-board detection result according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a mapping device according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a controller according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of an unmanned vehicle according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
It should be noted that, where they do not conflict, the various features in the embodiments of the present application may be combined with one another, and all such combinations fall within the protection scope of the present application. In addition, although functional modules are divided in the device diagrams and a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in a different order than the module division or the flowchart sequence.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
Referring to FIG. 1, FIG. 1 is a schematic flow chart of the mapping method provided in an embodiment of the present application. The mapping method is applied to an unmanned vehicle on which a laser radar, an IMU, and a GPS are disposed, and includes the following steps:
step S1: and when the unmanned vehicle runs, acquiring the original point cloud sampled by the laser radar, the pose data acquired by the IMU and the position data measured by the GPS.
The raw point cloud sampled by the laser radar refers to the set of spatial points scanned by a three-dimensional lidar device; every point carries three-dimensional coordinate information and may also carry color information, reflection intensity information, echo count information, and the like.
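Purely as an illustration (the patent does not prescribe any data layout), such a scan could be held in a NumPy structured array; every field name below is hypothetical:

```python
import numpy as np

# Hypothetical layout of one raw lidar scan: 3-D coordinates plus the
# per-point attributes mentioned above (reflection intensity, echo count)
# and a per-point capture time, which the de-distortion in step S2 needs.
POINT_DTYPE = np.dtype([
    ("x", np.float32), ("y", np.float32), ("z", np.float32),
    ("intensity", np.float32),  # reflection intensity
    ("echo", np.uint8),         # echo (return) number
    ("t", np.float64),          # capture time in seconds
])

def make_scan(n_points: int) -> np.ndarray:
    """Allocate an empty scan buffer with the fields described above."""
    return np.zeros(n_points, dtype=POINT_DTYPE)
```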
IMU is short for Inertial Measurement Unit, the core of an inertial navigation system. Using inertial measurement elements (gyroscopes and accelerometers), it measures the angular-motion and linear-motion parameters of the carrier relative to inertial space; given initial motion conditions, the carrier's velocity, position, attitude, and heading can then be obtained through navigation computation.
GPS is short for Global Positioning System, a high-precision radio navigation and positioning system based on artificial earth satellites; it can provide accurate geographic position, vehicle speed, and precise time information anywhere on the globe and in near-earth space.
In the embodiments of the present application, the laser radar, the IMU, and the GPS are mounted on the unmanned vehicle so that the three-dimensional lidar point cloud of the vehicle's surroundings, the pose data collected by the IMU, and the GPS-measured position data are acquired in real time. It should be noted that the lidar and the GPS provide point cloud data and position data at relatively low frequencies, while the IMU provides pose data at a higher frequency. The times at which the lidar collects point cloud data are denoted radar sampling times, the times at which the GPS collects position data are denoted GPS sampling times, and the times at which the IMU collects pose data are denoted IMU sampling times; there are multiple IMU sampling times between two consecutive radar sampling times, and likewise between two consecutive GPS sampling times.
Step S2: perform de-distortion processing on the raw point cloud according to the pose data to obtain an undistorted point cloud.
Some noise points in the raw lidar point cloud are first filtered out. Because the lidar collects data while spinning at high speed and the vehicle is simultaneously moving, the collected point cloud contains motion distortion, so the raw point cloud must be de-distorted. The IMU pose data provide the transformation needed for de-distortion: the radar points captured at each instant are transformed to a single reference radar pose, eliminating the motion distortion of the point cloud.
In some embodiments, performing the de-distortion processing on the raw point cloud according to the pose data to obtain an undistorted point cloud comprises: determining a datum point and distortion points from the raw point cloud; motion-compensating the points in the raw point cloud using the pose data to obtain interpolated pose data for each point; calculating the spatial pose relation between each distortion point and the datum point from the interpolated pose data between the distortion point's timestamp and the datum point's timestamp; and converting the distortion points into the coordinate system of the datum point according to the spatial pose relation to obtain the undistorted point cloud.
The datum point is the point used as the reference or basis against which the other points are calibrated; it is also called the reference point.
A distortion point is a point affected by motion distortion during acquisition; in this application, every point other than the datum point is a distortion point.
The timestamp is the recorded time at which the lidar acquired each point.
The data collected by the high-frequency IMU provide the transformation needed to de-distort the raw point cloud: through this transformation, each distortion point can be converted into the coordinate system of the datum point to obtain the undistorted point cloud. For example, the transformation can be determined with quaternions. Within one radar frame, record the first timestamp (of the first captured point) and the second timestamp (of the last captured point), and record all IMU data in the interval between them. Using the quaternion pose data provided by the IMU, spherically interpolate the pose of each point in the raw point cloud according to its timestamp to obtain interpolated pose data for each point; calculate the spatial pose relation between each distortion point and the datum point from the interpolated pose data at the distortion point's timestamp and at the datum point's timestamp; and convert the distortion points in the raw point cloud into the datum point's coordinate system according to this spatial pose relation, thereby eliminating the distortion caused by the lidar's motion during the scan.
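A minimal sketch of this interpolation-based motion compensation, assuming the IMU orientations are available as quaternions (with positions from integration) and using SciPy's Slerp; the patent names no library, and all point timestamps are assumed to lie within the recorded IMU interval:

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def undistort(points_xyz, point_times, imu_times, imu_quats, imu_pos, t_ref):
    """Motion-compensate one lidar frame.

    points_xyz : (N, 3) raw points in the sensor frame
    point_times: (N,) per-point timestamps
    imu_times  : (M,) IMU sample timestamps covering the frame
    imu_quats  : (M, 4) IMU orientations as quaternions (x, y, z, w)
    imu_pos    : (M, 3) IMU positions, e.g. from integration
    t_ref      : timestamp of the datum (reference) point
    """
    slerp = Slerp(imu_times, Rotation.from_quat(imu_quats))

    # Interpolated sensor pose at the datum point's timestamp.
    rot_ref = slerp([t_ref])[0]
    pos_ref = np.array([np.interp(t_ref, imu_times, imu_pos[:, i])
                        for i in range(3)])

    # Interpolated sensor pose at each distortion point's timestamp.
    rot_i = slerp(point_times)
    pos_i = np.stack([np.interp(point_times, imu_times, imu_pos[:, i])
                      for i in range(3)], axis=1)

    # Carry each point into a common world frame with its own pose, then
    # re-express it in the datum point's coordinate system.
    world = rot_i.apply(points_xyz) + pos_i
    return rot_ref.inv().apply(world - pos_ref)
```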
Step S3: match the undistorted point clouds to obtain pose estimates of the laser radar.
A lidar pose estimate is the relative pose between radar frames obtained by a matching algorithm. Matching algorithms such as ICP (Iterative Closest Point) and NDT (Normal Distributions Transform) can be used, but they do not match in real time. In this application, therefore, the ground visible while the vehicle is driving is exploited: the ground is segmented out of the point cloud, the curvature of each point is computed, and line and surface features are extracted from the undistorted point cloud according to curvature, with high-curvature points taken as line features and low-curvature points as surface features. To avoid concentrating features in one place, the point cloud from one full revolution can be divided into several regions, and features extracted within each region according to curvature; for example, a selected corner point may lie on a building corner and a planar point on a wall surface. A sketch of the curvature computation follows.
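In the sketch below, the window size and both thresholds are illustrative values, not ones the patent specifies:

```python
import numpy as np

def extract_features(scan_xyz, half_window=5, edge_thresh=1.0, plane_thresh=0.1):
    """Split one ring of undistorted points (ordered by scan angle) into
    edge (line) features and planar (surface) features by local curvature.
    Returns the indices of edge points and of planar points.
    """
    n = len(scan_xyz)
    curvature = np.full(n, np.nan)
    for i in range(half_window, n - half_window):
        # Sum of offsets of the neighbors from the center point; a large
        # norm means the point sits on an edge, a small one on a surface.
        diff = (scan_xyz[i - half_window:i + half_window + 1].sum(axis=0)
                - (2 * half_window + 1) * scan_xyz[i])
        curvature[i] = np.linalg.norm(diff) / np.linalg.norm(scan_xyz[i])

    edges = np.where(curvature > edge_thresh)[0]
    planes = np.where(curvature < plane_thresh)[0]
    return edges, planes
```

To avoid clustering features in one place, as described above, the same routine would be run per angular region of the full revolution, keeping only the strongest candidates in each.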
Step S4: integrate the motion information of the IMU between the lidar pose estimates to obtain a first relative pose between two adjacent frames.
Specifically, the IMU data between adjacent radar frames can be recorded and the output of the six-axis IMU pre-integrated: the angular velocities output by the IMU are integrated to compute the Euler-angle transformation between adjacent frames, which is then converted into a quaternion, both for convenient interpolation and to avoid the gimbal-lock problem of Euler angles; this quaternion serves as the first relative pose between adjacent radar frames.
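A sketch of this pre-integration, restricted to the rotational part (only the gyroscope is integrated; accelerometer handling is omitted) and using SciPy's Rotation for the quaternion algebra:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def preintegrate_gyro(gyro_samples, dts):
    """Integrate the IMU's angular velocities between two lidar frames
    into a single relative rotation, returned as a quaternion so that it
    interpolates cleanly and avoids Euler-angle gimbal lock.

    gyro_samples: (M, 3) angular velocities in rad/s
    dts         : (M,) sample intervals in seconds
    """
    q = Rotation.identity()
    for w, dt in zip(gyro_samples, dts):
        # Compose the small rotation accumulated over one IMU interval.
        q = q * Rotation.from_rotvec(np.asarray(w) * dt)
    # (x, y, z, w): the rotational part of the first relative pose.
    return q.as_quat()
```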
Step S5: input the GPS-measured position data, the lidar pose estimates, and the first relative pose into a factor graph for optimization to obtain an initially optimized point cloud map.
g2o (General Graph Optimization) is a C++ framework based on graph optimization, mainly used to optimize nonlinear error functions.
A factor graph is a representation of the factorization of a function; it generally contains two types of vertices, variable vertices and function (factor) vertices, with edges representing the functional relationships between vertices.
In this method, the radar poses are the vertices of the factor graph in the g2o optimization library, and the lidar pose estimates and the first relative poses are its binary edges. Because every match may carry a small error and each match builds on the previous one, the accumulated error grows with driving distance and the output point cloud map drifts excessively. The GPS-measured position data can therefore be imported into the factor graph as unary edges on the radar pose vertices and included in the optimization, yielding a more accurate point cloud map.
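As a library-free illustration of this graph layout only (a real implementation would hand these quantities to g2o; all class names here are hypothetical), the vertices and edges described above can be organized like this:

```python
from dataclasses import dataclass, field

@dataclass
class PoseVertex:          # one radar pose: x, y, z, qx, qy, qz, qw
    idx: int
    pose: list

@dataclass
class BinaryEdge:          # lidar match / IMU first relative pose between two vertices
    i: int
    j: int
    relative_pose: list    # 7 values, as above
    information: list      # flattened 6x6 weight matrix

@dataclass
class UnaryEdge:           # GPS position measurement on a single vertex
    i: int
    position: list         # x, y, z in the local frame
    information: list      # flattened 3x3 weight matrix

@dataclass
class FactorGraph:
    vertices: list = field(default_factory=list)
    binary_edges: list = field(default_factory=list)
    unary_edges: list = field(default_factory=list)
```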
The GPS-measured position data are obtained by directly reading the GPS output and converting the measured longitude, latitude, and altitude into a local east-north-up coordinate system, giving the vehicle's position at that moment.
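This longitude/latitude/altitude conversion is standard geodesy: pass through Earth-centered Earth-fixed (ECEF) coordinates and rotate into a local east-north-up frame anchored at a chosen origin, for example the first GPS fix of the run. A sketch using the WGS-84 constants:

```python
import numpy as np

A = 6378137.0            # WGS-84 semi-major axis (m)
E2 = 6.69437999014e-3    # WGS-84 first eccentricity squared

def geodetic_to_ecef(lat, lon, alt):
    lat, lon = np.radians(lat), np.radians(lon)
    n = A / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)
    return np.array([(n + alt) * np.cos(lat) * np.cos(lon),
                     (n + alt) * np.cos(lat) * np.sin(lon),
                     (n * (1.0 - E2) + alt) * np.sin(lat)])

def geodetic_to_enu(lat, lon, alt, lat0, lon0, alt0):
    """Convert one GPS fix to east-north-up coordinates relative to the
    local origin (lat0, lon0, alt0)."""
    d = geodetic_to_ecef(lat, lon, alt) - geodetic_to_ecef(lat0, lon0, alt0)
    la, lo = np.radians(lat0), np.radians(lon0)
    rot = np.array([
        [-np.sin(lo),               np.cos(lo),              0.0],
        [-np.sin(la) * np.cos(lo), -np.sin(la) * np.sin(lo), np.cos(la)],
        [ np.cos(la) * np.cos(lo),  np.cos(la) * np.sin(lo), np.sin(la)],
    ])
    return rot @ d
```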
In some embodiments, based on the lidar pose estimates, the lidar point cloud at the current radar sampling time and all IMU data at each IMU sampling time from the previous radar sampling time to the current one are acquired; a predicted current state of the unmanned vehicle at the current radar sampling time is computed from all the IMU data and the state at the previous radar sampling time; the GPS-measured position data corresponding to the current radar sampling time are acquired; the predicted state is corrected with the position data to obtain a current state estimate; and all the IMU data are back-propagated according to the current state estimate and the first relative pose to correct the lidar pose estimates, yielding the initially optimized point cloud map.
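A deliberately simplified sketch of this predict-correct step; the constant blend gain below merely stands in for the gain a full Kalman-style filter would derive, and the accelerations are assumed to be already rotated into the world frame and gravity-compensated:

```python
import numpy as np

def predict_position(pos, vel, accels_world, dts):
    """Propagate position and velocity through every IMU sample between
    the previous and the current radar sampling time."""
    pos, vel = np.asarray(pos, float), np.asarray(vel, float)
    for a, dt in zip(accels_world, dts):
        a = np.asarray(a, float)
        pos = pos + vel * dt + 0.5 * a * dt * dt
        vel = vel + a * dt
    return pos, vel

def correct_with_gps(pos_pred, gps_pos, gain=0.5):
    """Pull the IMU prediction toward the GPS fix to form the current
    state estimate; `gain` is a stand-in for a proper filter gain."""
    pos_pred = np.asarray(pos_pred, float)
    return pos_pred + gain * (np.asarray(gps_pos, float) - pos_pred)
```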
Step S6: perform loop-closure detection on the current state of the unmanned vehicle based on the initially optimized point cloud map and the preset calibration boards.
A calibration board is a purpose-designed object bearing strong, recognizable feature markings.
Loop detection, also called closed-loop detection, refers to the unmanned vehicle's ability to recognize that it has reached a previously visited scene, closing a loop in the map. A successful loop detection markedly reduces accumulated error, letting the unmanned vehicle localize and perceive more accurately; an erroneous loop detection, however, not only fails to eliminate error but makes the point cloud map worse and can even crash the whole system.
It will be appreciated that loop detection must determine whether the vehicle has arrived at a previously visited location; that is, it must know whether, at different times, the vehicle has returned to the same place it drove through before.
During driving, the vehicle encounters many dynamic obstacles, such as signalized crossroads and roads busy with large trucks, as well as visually repetitive scenes such as residential compounds or identically built factory buildings. When loop candidates are searched by the usual Euclidean distance on the odometry or on GPS positions, the features recognized on the first and second passes may differ greatly, or too many identical-looking features may be recognized, so the closed loop is missed or mismatched. To avoid this, preset calibration boards are placed at the junctions, crossroads, and other places that must be passed repeatedly, and the detection range and detection frequency are dynamically adjusted to the on-site placement of the boards at each location, effectively improving detection speed and accuracy. There may be one calibration board or several, depending on the actual situation.
Referring to FIG. 2, FIG. 2 is a schematic view of a calibration-board placement scenario. As shown in FIG. 2, along the route of the unmanned vehicle 10, the calibration boards 20, 30, and 40 (boxed in the figure) are arranged at the same location; by setting these boards, the detection range and detection frequency can be dynamically adjusted, effectively improving both the speed and the accuracy of loop detection.
In some embodiments, when a preset number of calibration boards are placed on a preset road section along the unmanned vehicle's route, performing loop-closure detection on the current state of the unmanned vehicle based on the initially optimized point cloud map and the preset calibration boards comprises: when the unmanned vehicle travels onto the preset road section, obtaining the vehicle's current index according to the preset number of calibration boards; retrieving the index recorded the last time the vehicle in its current state traveled onto the preset road section; and comparing the current index with the last recorded index: when the contents of the two recorded indexes are the same or their error is within a preset range, determining that the current state of the unmanned vehicle is closed-loop travel; otherwise, determining that it is non-closed-loop travel.
The index is a mark made when the unmanned vehicle reaches a particular location. For example, when the vehicle passes a junction for the first time its index may be recorded as 01, and on the second pass as 02; the data recorded when the vehicle passed the junction can then be retrieved through the index.
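Illustratively (the function name and the tolerance parameter are hypothetical, not from the patent), the index comparison behind the loop check can be as simple as:

```python
def is_closed_loop(current_index, last_recorded_index, tolerance=0):
    """Closed loop if the index recorded on this pass matches the one
    recorded on the previous pass through the same road section, or
    differs from it by no more than a preset tolerance."""
    if last_recorded_index is None:   # no earlier pass recorded
        return False
    return abs(current_index - last_recorded_index) <= tolerance
```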
In some embodiments, obtaining the current index of the unmanned vehicle according to the preset number of calibration boards when the vehicle travels onto the preset road section comprises: receiving the calibration-board detection range and a detection-frequency instruction when the vehicle travels onto the preset road section; filtering the point cloud in the initially optimized point cloud map according to the detection range and the detection-frequency instruction to obtain the processed point cloud within the calibration boards' detection range; extracting the points whose intensity exceeds a preset threshold according to the point cloud intensity information; performing Hough line detection on the extracted points and selecting three straight lines that satisfy preset conditions; and selecting one point on each of the three lines, testing whether they satisfy the condition for forming a triangle, and, if so, recording the index of the vehicle's current radar odometry.
Referring to FIG. 3, FIG. 3 is a schematic diagram of adjusting the calibration-board parameters provided in an embodiment of the present application. As shown in FIG. 3, when the unmanned vehicle travels onto the preset road section it receives the calibration-board detection range and a detection-frequency instruction. The parameters must be adjusted so that the bounding box A encloses the calibration boards front-to-back, left-to-right, and top-to-bottom. The calibration-board parameters in this application support dynamic adjustment, and the bounding box A moves with the latest radar odometry during map building. It should be noted that adjusting the calibration-board parameters changes the extent of bounding box A and therefore the computation speed.
When there are three calibration boards, the range framed around them in the point cloud map corresponds to the bounding box A in FIG. 3. Once the detection range and detection frequency have been adjusted, detection proceeds as follows. First, the current frame's points inside bounding box A are extracted. Owing to the inherent properties of the calibration boards, lidar returns from the boards have higher intensity than those from elsewhere, so the high-intensity points can be isolated: a preset intensity threshold is set, greater than the intensity of ordinary points and no greater than that of points returned by the boards, and the points exceeding it are extracted. Several straight lines are then fitted to the extracted points by Hough line detection; one point is selected on each of three lines, and it is tested whether the three points satisfy the condition for forming a triangle. If so, the index of the vehicle's current radar odometry is recorded.
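A sketch of this plate check that rasterizes the high-intensity points to a top-down grid so OpenCV's Hough transform can be applied; the grid resolution and every threshold below are illustrative values, not ones given in the patent:

```python
import numpy as np
import cv2

def detect_calibration_boards(points_xyz, intensities, intensity_thresh,
                              res=0.05, hough_votes=20):
    """Return True if three lines fitted to the high-intensity points can
    form a triangle, mirroring the check described above."""
    pts = points_xyz[intensities > intensity_thresh][:, :2]
    if len(pts) < 3:
        return False

    # Rasterize the bird's-eye view (res meters per pixel).
    ij = ((pts - pts.min(axis=0)) / res).astype(int)
    img = np.zeros((ij[:, 1].max() + 1, ij[:, 0].max() + 1), dtype=np.uint8)
    img[ij[:, 1], ij[:, 0]] = 255

    lines = cv2.HoughLines(img, 1, np.pi / 180, hough_votes)
    if lines is None or len(lines) < 3:
        return False

    # One point per line: the foot of the perpendicular from the origin.
    feet = [np.array([r * np.cos(t), r * np.sin(t)]) for r, t in lines[:3, 0]]
    a, b, c = feet
    # Non-collinear points span a triangle with nonzero (doubled) area.
    area2 = abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
    return area2 > 1e-6
```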
Specifically, referring to FIG. 4, FIG. 4 is a schematic diagram of a calibration-board detection result provided in an embodiment of the present application. As shown in FIG. 4, the calibration boards detected around the unmanned vehicle 10a are the boards 20a, 30a, and 40a (boxed in the figure).
Step S7: obtain the second relative pose between two adjacent frames according to the loop detection result.
In some embodiments, obtaining the second relative pose between two adjacent frames according to the loop detection result specifically comprises: when the loop detection result determines that the current state of the unmanned vehicle is closed-loop travel, recording the second relative pose between the two adjacent frames of point clouds.
When the unmanned vehicle first reaches the place where the calibration boards are placed, the radar odometry index at that moment is recorded; when it passes the boards the second time, the index at that moment is recorded as well, and the features of the two point cloud frames from the two passes are matched with the ICP algorithm, yielding a second relative pose between the two frames that is free of accumulated error.
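A minimal sketch of this frame-to-frame matching using Open3D's point-to-point ICP; the patent names ICP but no particular library, and the correspondence radius below is an assumed value:

```python
import numpy as np
import open3d as o3d

def loop_relative_pose(first_pass_xyz, second_pass_xyz, max_dist=1.0):
    """Match the point cloud frames recorded on the two passes by the
    calibration boards and return the 4x4 transform between them: the
    drift-free 'second relative pose'."""
    src = o3d.geometry.PointCloud()
    src.points = o3d.utility.Vector3dVector(second_pass_xyz)
    tgt = o3d.geometry.PointCloud()
    tgt.points = o3d.utility.Vector3dVector(first_pass_xyz)

    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # enters the factor graph as a binary edge
```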
Step S8: input the second relative pose into the factor graph for optimization to obtain the final optimized point cloud map.
The second relative pose obtained from the ICP matching is imported into the factor graph as a binary edge between the corresponding radar odometry vertices and optimized, producing the final optimized point cloud map.
To summarize, the mapping method is applied to an unmanned vehicle on which a laser radar, an IMU, and a GPS are disposed, and comprises: while the unmanned vehicle is running, acquiring the raw point cloud sampled by the laser radar, the pose data collected by the IMU, and the position data measured by the GPS; performing de-distortion processing on the raw point cloud according to the pose data to obtain an undistorted point cloud; matching the undistorted point clouds to obtain pose estimates of the laser radar; integrating the motion information of the IMU between the lidar pose estimates to obtain a first relative pose between two adjacent frames; inputting the GPS-measured position data, the lidar pose estimates, and the first relative pose into a factor graph for optimization to obtain an initially optimized point cloud map; performing loop-closure detection on the current state of the unmanned vehicle based on the initially optimized point cloud map and preset calibration boards; obtaining a second relative pose between two adjacent frames according to the loop detection result; and inputting the second relative pose into the factor graph for optimization to obtain the final optimized point cloud map. This addresses the prior-art problems that positioning and perception based on a point cloud map are not accurate enough and that loop detection is error-prone: placing calibration boards with strong features for loop-closure feature matching raises the success rate of loop detection, markedly reduces errors in the mapping process, helps build a high-precision point cloud map, and helps the unmanned vehicle localize and plan paths better.
Referring to FIG. 5, FIG. 5 is a schematic structural diagram of a mapping device provided in an embodiment of the present application. The device is applied to an unmanned vehicle on which a laser radar, an IMU, and a GPS are disposed. The device 100 includes: a parameter acquisition module 101, a distortion processing module 102, a pose estimation acquisition module 103, a first relative pose acquisition module 104, an initially optimized point cloud map acquisition module 105, a loop detection module 106, a second relative pose acquisition module 107, and a final optimized point cloud map acquisition module 108.
The parameter acquisition module 101 is configured to acquire, while the unmanned vehicle is running, the raw point cloud sampled by the laser radar, the pose data collected by the IMU, and the position data measured by the GPS.
The distortion processing module 102 is configured to perform de-distortion processing on the raw point cloud according to the pose data to obtain an undistorted point cloud.
The pose estimation acquisition module 103 is configured to match the undistorted point clouds to obtain pose estimates of the laser radar. The first relative pose acquisition module 104 is configured to integrate the motion information of the IMU between the lidar pose estimates to obtain a first relative pose between two adjacent frames. The initially optimized point cloud map acquisition module 105 is configured to input the GPS-measured position data, the lidar pose estimates, and the first relative pose into a factor graph for optimization to obtain an initially optimized point cloud map. The loop detection module 106 is configured to perform loop-closure detection on the current state of the unmanned vehicle based on the initially optimized point cloud map and preset calibration boards. The second relative pose acquisition module 107 is configured to obtain a second relative pose between two adjacent frames according to the loop detection result. The final optimized point cloud map acquisition module 108 is configured to input the second relative pose into the factor graph for optimization to obtain a final optimized point cloud map.
It should be noted that the mapping device can execute the mapping method provided by the embodiments of the present application and has the corresponding functional modules and beneficial effects. For technical details not described in this device embodiment, refer to the mapping method provided by the embodiments of the present application.
In the embodiments of the present application, the mapping device may also be built from hardware devices. For example, it may be built from one or more chips that cooperate to carry out the mapping method described in the above embodiments. As another example, the mapping device may be formed from various types of logic devices, such as a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a single-chip microcomputer, an ARM (Acorn RISC Machine) processor or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof.
The mapping device in the embodiments of the present application may be a device with an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
It should be noted that the information interaction and execution processes between the modules and units of the mapping device are based on the same concept as the method embodiments of the present application, so the specific content there applies to the mapping device as well. Each module in the embodiments of the present application can be implemented as separate hardware or software, and combinations of the units' functions may be implemented in separate hardware or software as needed.
The present application further provides a controller; referring to FIG. 6, FIG. 6 is a schematic structural diagram of the controller according to an embodiment of the present application. The controller 200 includes at least one processor 201 and a memory 202 communicatively connected to the at least one processor 201, where the memory 202 stores instructions executable by the at least one processor 201, the instructions being executed by the at least one processor 201 to enable the at least one processor 201 to perform the mapping method in any of the method embodiments described above. The processor 201 and the memory 202 may be connected by a bus or in another manner; a bus connection is taken as the example in FIG. 6.
The processor 201 may be a general purpose processor including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), a hardware chip, or any combination thereof; it may also be a digital signal processor (Digital Signal Processing, DSP), application specific integrated circuit (Application Specific Integrated Circuit, ASIC), programmable logic device (programmable logic device, PLD), or a combination thereof. The PLD may be a complex programmable logic device (complex programmable logic device, CPLD), a field-programmable gate array (field-programmable gate array, FPGA), general-purpose array logic (generic array logic, GAL), or any combination thereof.
The memory 202 is used as a non-transitory computer readable storage medium for storing non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the mapping method in the embodiments of the present application. The processor 201 may implement the mapping method in any of the above method embodiments by running non-transitory software programs, instructions and modules stored in the memory 202, i.e. may be capable of implementing the entire process of fig. 1.
The present application also provides an unmanned vehicle. Referring to FIG. 7, FIG. 7 is a schematic structural diagram of the unmanned vehicle provided in an embodiment of the present application. The unmanned vehicle 300 includes: a lidar 301, an IMU 302, a GPS 303, and the controller 200 described above.
Embodiments of the present application provide a non-transitory computer-readable storage medium storing computer-executable instructions, for example a memory containing program code executable by a processor to perform the mapping method of the above embodiments. The computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
Embodiments of the present application provide a computer program product comprising one or more pieces of program code stored in a computer-readable storage medium. A processor of the unmanned vehicle reads the program code from the computer-readable storage medium and executes it to complete the steps of the mapping method provided in the above embodiments.
From the above description of embodiments, those skilled in the art will appreciate that the embodiments may be implemented by software plus a general-purpose hardware platform, or by hardware alone. All or part of the processes in the above method embodiments may be completed by a computer program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Under the idea of the present application, the technical features of the above embodiments, or of different embodiments, may also be combined, and the steps may be carried out in any order; many other variations of the different aspects of the present application exist as described above but are not detailed here for brevity. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or substitute equivalents for some of the technical features; such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A mapping method applied to an unmanned vehicle on which a laser radar, an IMU, and a GPS are disposed, characterized in that the method comprises:
while the unmanned vehicle is running, acquiring the raw point cloud sampled by the laser radar, the pose data collected by the IMU, and the position data measured by the GPS;
performing de-distortion processing on the raw point cloud according to the pose data to obtain an undistorted point cloud;
matching the undistorted point clouds to obtain pose estimates of the laser radar;
integrating the motion information of the IMU between the lidar pose estimates to obtain a first relative pose between two adjacent frames;
inputting the GPS-measured position data, the lidar pose estimates, and the first relative pose into a factor graph for optimization to obtain an initially optimized point cloud map;
performing loop-closure detection on the current state of the unmanned vehicle based on the initially optimized point cloud map and preset calibration boards;
obtaining a second relative pose between two adjacent frames according to the loop detection result; and
inputting the second relative pose into the factor graph for optimization to obtain a final optimized point cloud map.
2. The mapping method according to claim 1, wherein performing the de-distortion processing on the raw point cloud according to the pose data to obtain an undistorted point cloud comprises:
determining a datum point and distortion points from the raw point cloud;
motion-compensating the points in the raw point cloud using the pose data to obtain interpolated pose data for each point;
calculating the spatial pose relation between each distortion point and the datum point from the interpolated pose data between the distortion point's timestamp and the datum point's timestamp; and
converting the distortion points into the coordinate system of the datum point according to the spatial pose relation to obtain the undistorted point cloud.
3. The mapping method according to claim 1, wherein inputting the GPS-measured position data, the lidar pose estimates, and the first relative pose into the factor graph for optimization to obtain an initially optimized point cloud map comprises:
based on the lidar pose estimates, acquiring the lidar point cloud at the current radar sampling time and all IMU data at each IMU sampling time from the previous radar sampling time to the current radar sampling time;
calculating a predicted current state of the unmanned vehicle at the current radar sampling time from the IMU data and the state at the previous radar sampling time;
acquiring the GPS-measured position data corresponding to the current radar sampling time;
correcting the predicted current state with the position data to obtain a current state estimate; and
back-propagating all the IMU data according to the current state estimate and the first relative pose to correct the lidar pose estimates and obtain the initially optimized point cloud map.
4. The mapping method according to any one of claims 1 to 3, wherein, when a preset number of calibration boards are placed on a preset road section along the unmanned vehicle's route, performing loop-closure detection on the current state of the unmanned vehicle based on the initially optimized point cloud map and the preset calibration boards comprises:
when the unmanned vehicle travels onto the preset road section, obtaining the current index of the unmanned vehicle according to the preset number of calibration boards;
retrieving the index recorded the last time the unmanned vehicle in its current state traveled onto the preset road section; and
comparing the current index with the last recorded index: when the contents of the two recorded indexes are the same or their error is within a preset range, determining that the current state of the unmanned vehicle is closed-loop travel; otherwise, determining that the current state of the unmanned vehicle is non-closed-loop travel.
5. The mapping method according to claim 4, wherein obtaining the current index of the unmanned vehicle according to the preset number of calibration boards when the unmanned vehicle travels onto the preset road section comprises:
when the unmanned vehicle travels onto the preset road section, receiving the calibration-board detection range and a detection-frequency instruction;
filtering the point cloud in the initially optimized point cloud map according to the detection range and the detection-frequency instruction to obtain the processed point cloud within the calibration boards' detection range;
extracting the points whose intensity exceeds a preset threshold according to the point cloud intensity information;
performing Hough line detection on the extracted points and selecting three straight lines that satisfy preset conditions; and
selecting one point on each of the three straight lines, calculating whether the condition for forming a triangle is satisfied, and, if so, recording the index of the unmanned vehicle's current radar odometry.
6. The mapping method according to claim 4, wherein obtaining the second relative pose between two adjacent frames according to the loop detection result specifically comprises:
when the loop detection result determines that the current state of the unmanned vehicle is closed-loop travel, recording the second relative pose between the two adjacent frames of point clouds.
7. A mapping device applied to an unmanned vehicle on which a laser radar, an IMU, and a GPS are disposed, the device comprising:
a parameter acquisition module, configured to acquire, while the unmanned vehicle is running, the raw point cloud sampled by the laser radar, the pose data collected by the IMU, and the position data measured by the GPS;
a distortion processing module, configured to perform de-distortion processing on the raw point cloud according to the pose data to obtain an undistorted point cloud;
a pose estimation acquisition module, configured to match the undistorted point clouds to obtain pose estimates of the laser radar;
a first relative pose acquisition module, configured to integrate the motion information of the IMU between the lidar pose estimates to obtain a first relative pose between two adjacent frames;
an initially optimized point cloud map acquisition module, configured to input the GPS-measured position data, the lidar pose estimates, and the first relative pose into a factor graph for optimization to obtain an initially optimized point cloud map;
a loop detection module, configured to perform loop-closure detection on the current state of the unmanned vehicle based on the initially optimized point cloud map and preset calibration boards;
a second relative pose acquisition module, configured to obtain a second relative pose between two adjacent frames according to the loop detection result; and
a final optimized point cloud map acquisition module, configured to input the second relative pose into the factor graph for optimization to obtain a final optimized point cloud map.
8. A controller, comprising: at least one processor; a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the mapping method of any one of claims 1-6.
9. An unmanned vehicle, comprising: a laser radar, an IMU, a GPS, and the controller of claim 8.
10. A non-transitory computer readable storage medium storing computer executable instructions which, when executed by a controller, cause the controller to perform the mapping method of any of claims 1-6.
CN202310357308.0A 2023-03-24 2023-03-24 Mapping method and device, controller and unmanned vehicle Pending CN116399324A (en)

Priority Applications (1)

Application Number: CN202310357308.0A; Publication: CN116399324A (en); Priority Date: 2023-03-24; Filing Date: 2023-03-24; Title: Mapping method and device, controller and unmanned vehicle

Applications Claiming Priority (1)

Application Number: CN202310357308.0A; Publication: CN116399324A (en); Priority Date: 2023-03-24; Filing Date: 2023-03-24; Title: Mapping method and device, controller and unmanned vehicle

Publications (1)

Publication Number: CN116399324A; Publication Date: 2023-07-07

Family

ID=87013761

Family Applications (1)

Application Number: CN202310357308.0A; Title: Mapping method and device, controller and unmanned vehicle; Priority Date: 2023-03-24; Filing Date: 2023-03-24; Status: Pending

Country Status (1)

Country: CN; Link: CN116399324A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117470230A (en) * 2023-10-23 2024-01-30 广州创源机器人有限公司 Visual laser sensor fusion positioning algorithm based on deep learning
CN117671013A (en) * 2024-02-01 2024-03-08 安徽蔚来智驾科技有限公司 Point cloud positioning method, intelligent device and computer readable storage medium
CN117671013B (en) * 2024-02-01 2024-04-26 安徽蔚来智驾科技有限公司 Point cloud positioning method, intelligent device and computer readable storage medium



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination