CN110864696A - Three-dimensional high-precision map drawing method based on vehicle-mounted laser inertial navigation data - Google Patents


Info

Publication number
CN110864696A
CN110864696A (application CN201910884069.8A)
Authority
CN
China
Prior art keywords
lane
vehicle
line
data
dimensional
Prior art date
Legal status: Pending (an assumption, not a legal conclusion)
Application number
CN201910884069.8A
Other languages
Chinese (zh)
Inventor
罗文婷 (Luo Wenting)
李林 (Li Lin)
Current Assignee
Fujian Ruidao Engineering Technology Consulting Co Ltd
Fujian Agriculture and Forestry University
Original Assignee
Fujian Ruidao Engineering Technology Consulting Co Ltd
Fujian Agriculture and Forestry University
Priority date
Filing date
Publication date
Application filed by Fujian Ruidao Engineering Technology Consulting Co Ltd, Fujian Agriculture and Forestry University filed Critical Fujian Ruidao Engineering Technology Consulting Co Ltd
Priority to CN201910884069.8A priority Critical patent/CN110864696A/en
Publication of CN110864696A publication Critical patent/CN110864696A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS › G01 MEASURING; TESTING › G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/32 — Navigation in a road network with correlation of data from several navigational instruments; map- or contour-matching; structuring or formatting of map data
    • G01C21/165 — Dead reckoning by integrating acceleration or speed (inertial navigation) combined with non-inertial navigation instruments
    • G06T17/05 — Three-dimensional [3D] modelling; geographic models
    • G06T5/30 — Image enhancement or restoration by local operators; erosion or dilatation, e.g. thinning
    • G06V20/588 — Recognition of the road, e.g. of lane markings, using sensors mounted on the vehicle

Abstract

The invention relates to a three-dimensional high-precision map drawing method based on vehicle-mounted laser and inertial navigation data, comprising the following steps. Step S1: locate the lateral offset of the driving track while the collection vehicle is driving. Step S2: correct the road-plane driving track. Step S3: construct the three-dimensional high-precision map. Step S4: the terminal views geographic information and three-dimensional road alignment data in real time through a mobile-phone app and performs navigation on the three-dimensional high-precision map. The invention enables high-speed, efficient, road-network-level automatic acquisition of road geometric parameters, reduces the difficulty and labour cost of data collection, eliminates the errors that vibration and lateral drift of the survey vehicle introduce into road geometry detection, and improves measurement accuracy.

Description

Three-dimensional high-precision map drawing method based on vehicle-mounted laser inertial navigation data
Technical Field
The invention relates to the research fields of road automatic detection technology, automatic driving technology and three-dimensional scene reconstruction technology, in particular to a three-dimensional high-precision map drawing method based on vehicle-mounted laser inertial navigation data.
Background
Automatic driving is a future trend of intelligent transportation, and constructing a three-dimensional high-precision map on the basis of big data and sharing the map data is an indispensable part of it. However, a conventional GIS map cannot meet the demands of automatic driving, mainly because: 1) local updating is difficult, so high-frequency road modification and extension cannot be reflected in time; 2) map precision is low and cannot satisfy the requirements of automatic-driving operation; 3) a two-dimensional map cannot represent in three dimensions the overlapping relation of the multiple lane levels of interchanges and viaducts.
At present, many scholars and research institutions at home and abroad (including Google, DeepMap, Baidu, AutoNavi/Amap and others) have studied high-precision maps for automatic driving in depth. These studies mainly use vehicle-mounted equipment such as an inertial navigation system, a lidar and a panoramic camera to collect road-information data, and extract road alignment information through corresponding algorithms to construct a high-precision map for an automatic-driving navigation system.
In the prior art, alignment-measurement methods based on road-scene images measure the horizontal alignment of the road by automatically identifying road markings, converting coordinates and correcting distortion parameters. Although such methods have low acquisition cost and are not affected by the lateral drift of the collection vehicle, the image data collected by an area-array camera can only be used to extract the horizontal alignment of a level road, and the presence of longitudinal and cross slopes distorts the horizontal-alignment measurement.
The most common current high-precision mapping practice collects road information with a survey vehicle: an inertial navigation system records the vehicle's driving track along the lane, and the road alignment is drawn from that track. However, the collection vehicle inevitably drifts laterally relative to the lane center line while driving, so its track deviates somewhat from the lane center line; if a high-precision map is drawn using the vehicle track as the lane center line, the map will deviate from the real situation.
At present, most 3D high-precision mapping is carried out with uncorrected data, which deviates from the real road condition and can therefore cause traffic accidents during automatic driving. In addition, recognizing lane edge lines and markings and positioning them accurately on the map are essential contents of a high-precision map and affect the operation and safety of automatic driving.
The inertial navigation system is currently the most widely adopted vehicle-mounted road-alignment measuring equipment; it extracts road alignment information mainly by recording the vehicle's driving track along the road. This acquisition method is efficient and easy to update, and besides the horizontal alignment it can also acquire three-dimensional alignment information such as the road's cross and longitudinal slopes. Its main idea is to locate the tangent–circle and circle–tangent (ZY/YZ) transition points of the horizontal alignment from the pattern of azimuth change, and to calibrate the cross and longitudinal slopes of the road from the roll and pitch angles. To eliminate the influence of individual driving habits on the track, methods that acquire road alignment from the tracks of many vehicles are now used; but if the overall track is biased, the acquired alignment information still contains errors.
In summary, existing methods for automatic acquisition of road geometry have limitations. The main problems are that the tangent–circle transition points of horizontal curves cannot be detected and identified automatically, which lowers measurement efficiency, and that the drift and bouncing of the collection vehicle introduce errors into the measurement results that are not effectively corrected.
Disclosure of Invention
In view of the above, the present invention provides a three-dimensional high-precision map drawing method based on vehicle-mounted laser inertial navigation data, which can achieve high-speed, high-efficiency, and road network-level automatic collection of road geometric parameters, and reduce the difficulty of data collection and the consumption of manpower.
The invention is realized by adopting the following scheme: a three-dimensional high-precision map drawing method based on vehicle-mounted laser and inertial navigation data, which involves providing a collection vehicle, an inertial measurement unit installed inside it, three-dimensional line-scan laser devices symmetrically suspended at the two ends of the vehicle tail, and a wheel-mounted distance meter installed at a rear wheel. The inertial measurement unit, the line-scan laser devices and the distance meter are all communicatively connected to a terminal in the collection vehicle and transmit to it in real time the two-dimensional road laser images collected by the laser devices, the azimuth data collected by the inertial measurement unit, and the wheel-axle travel-distance data recorded by the distance meter;
the three-dimensional high-precision map drawing method based on the device and the data comprises the following steps:
step S1: positioning the transverse offset of the driving track in the driving process of the collection vehicle;
step S2: correcting a road plane driving track;
step S3: constructing a three-dimensional high-precision map;
step S4: and the terminal checks geographic information and road three-dimensional linear data in real time through mobile phone APP software and conducts three-dimensional high-precision map navigation.
Further, the step S1 specifically includes the following steps:
step S11: collecting a two-dimensional laser image of a pavement by using three-dimensional line scanning laser equipment arranged on a collection vehicle, and performing image binarization processing by using the Otsu method;
step S12: filling the void parts inside the lane edge lines in the binary image using dilation–erosion morphological operations;
step S13: recognizing the edge line of the lane based on a Canny algorithm;
step S14: after the identification of the lane edge line in step S13 is completed, the relative position of the lane marking on the two-dimensional laser image is found, thereby locating the lateral offset amount of the vehicle in driving.
Further, the image binarization in step S11 using the Otsu method is specifically as follows:
an improved Otsu method is used for image binarization. With threshold t the image is divided into two parts I1 and I2, where I1 contains the pixels with gray level in [0, t] and I2 the pixels with gray level in [t+1, L−1]; the threshold is calculated by formulas (1)–(3):
w0 = Σ_{i=0}^{t} p_i (1)

w1 = Σ_{i=t+1}^{L−1} p_i (2)

t* = argmax_{0 ≤ t ≤ L−1} [ w0 · w1 · (μ0 − μ1)² ] (3)
where p_i is the proportion (%) of pixels with gray value i among all pixels; w0 is the background proportion (%); w1 is the foreground proportion (%); t is the segmentation threshold, with range 0 to L−1; L is the maximum gray level of the image; μ0 and μ1 are the mean gray levels of I1 and I2.
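A minimal sketch of the criterion that formulas (1)–(3) express: the standard Otsu threshold. The patent's "improved" variant is not specified, so this is plain Otsu on a hypothetical toy image:

```python
def otsu_threshold(pixels, levels):
    """Gray level t maximizing the between-class variance w0*w1*(mu0-mu1)^2."""
    n = len(pixels)
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    p = [h / n for h in hist]              # p_i: share of pixels at gray level i
    best_t, best_var = 0, -1.0
    for t in range(levels - 1):
        w0 = sum(p[:t + 1])                # formula (1): background proportion
        w1 = 1.0 - w0                      # formula (2): foreground proportion
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = sum(i * p[i] for i in range(t + 1)) / w0
        mu1 = sum(i * p[i] for i in range(t + 1, levels)) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2   # formula (3): between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Toy 4-bit "laser image" row: dark asphalt (levels 1-2) vs bright marking (12-13).
img = [1, 1, 2, 1, 2, 13, 12, 13, 12, 2, 1, 13]
t = otsu_threshold(img, 16)
binary = [1 if v > t else 0 for v in img]
```

Pixels above the threshold map to 1 (marking) and the rest to 0; this binary image is what step S12 then fills.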
Further, step S14 is specifically: perform edge detection on the lane edge lines with the Canny algorithm, traverse the edge-line edges in a closed loop over non-zero pixel values to determine the geometric outline of each edge line, and eliminate non-edge-line noise according to the shape and size of the outline. Since a lane edge-line marking is rectangular, the longitudinal pixels of its outline are fitted and the positions of its outer boundaries on both sides in the image are extracted, thereby determining the coordinate position of the edge line. Taking the center line of the collected two-dimensional laser image as the center line of the collection vehicle's driving track, the lateral position of the vehicle in the actual lane is measured from the image coordinates of the lane edge lines, and the lateral offset displacement of the driving track is calculated.
Further, calculating the lateral offset displacement of the driving track covers three cases: both edge lines are captured, only one edge line is captured, and neither edge line is captured;
both edge lines are captured when the lateral offset of the collection vehicle is less than 25 cm and both the left and right edge lines are solid, or the solid segments of dashed edge lines happen to be captured. The actual lane width is calculated from the image coordinates of the recognized left and right edge lines; the distance between the midline of the two edge lines (the lane center line) and the image center line (the center line of the survey vehicle's track) is then measured, and this distance is the offset displacement of the driving track;
only one edge line is captured when the vehicle drifts more than 25 cm to one side and the captured line is solid (or a solid segment of a dashed line), or when the offset displacement is less than 25 cm but one side falls on a gap of a dashed edge line. The image coordinate of the lane center line is then determined from the previously measured lane width and the coordinate of the single captured edge line, and the offset displacement is calculated from the coordinates of the lane center line and the track center line;
neither edge line is captured when the vehicle drifts more than 25 cm to one side and that side's edge line falls on a gap of a dashed line, so neither the left nor the right edge line appears in the image. In view of the continuity of the driving track, the offset displacement on such an image is defined as the average of the offset displacements of the preceding and following images.
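The three capture cases above can be condensed into one small function. This is an illustrative sketch only: coordinates are assumed to already be in metres in the image's cross-section frame, and the sign convention (positive = vehicle right of the lane center) is a choice made here, not the patent's:

```python
def track_offset(left_x, right_x, img_center_x, lane_width=None,
                 prev=None, nxt=None):
    """Lateral offset (m) of the survey-vehicle track from the lane center."""
    if left_x is not None and right_x is not None:
        lane_center = (left_x + right_x) / 2        # both edges captured
    elif left_x is not None:
        lane_center = left_x + lane_width / 2       # only the left edge captured
    elif right_x is not None:
        lane_center = right_x - lane_width / 2      # only the right edge captured
    else:
        return (prev + nxt) / 2                     # neither: average neighbours
    return img_center_x - lane_center
```

For a 4 m image centred at 2.0 m, edges detected at 0.2 m and 3.8 m give zero offset; a single left edge at 0.5 m with a 3.6 m lane gives an offset of −0.3 m.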
Further, step S2 is specifically: draw the plane driving track of the collection vehicle from the azimuth data collected by the inertial measurement unit and the wheel-axle travel-distance data recorded by the wheel-mounted distance meter (photoelectric encoder). The data-acquisition frequency of the inertial measurement unit is triggered by the pulses sent by the distance meter, giving a data-sample interval of 10 cm. The coordinates of the vehicle's plane driving track are obtained by formulas (4)–(6); the plane track of the lane center line is obtained from the vehicle track and its offset displacement relative to the lane center line, with coordinates calculated by formulas (7) and (8);
x_i = x_{i−1} + z × (D_i − D_{i−1}) × cos H_{i−1} (4)

y_i = y_{i−1} + z × (D_i − D_{i−1}) × sin H_{i−1} (5)

[Formula (6), which defines the scale factor z converting encoder pulse counts to travelled distance, appears only as an image in the source]
x′_i = x_i − d_L × cos H_i (7)

y′_i = y_i + d_L × sin H_i (8)
where x_i is the abscissa (m) of the i-th sample data point of the survey vehicle's driving track; y_i is its ordinate (m); D_i is the i-th pulse-count sample sent by the photoelectric encoder; H_i is the azimuth angle (°) at the i-th sample point of the driving track; n is the total number of samples collected by the inertial measurement unit; m is the total number of samples collected by the line-scan laser device; x′_i and y′_i are the abscissa and ordinate (m) of the lane center line; d_L is the lateral offset displacement (m) of the survey vehicle.
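Formulas (4)–(5) and (7)–(8) can be run as written. In this sketch, z (the pulse-to-metre scale factor that formula (6) defines) is taken as an assumed constant of 0.001 m per pulse, and the trigonometric conventions follow the printed formulas exactly:

```python
import math

def dead_reckon(pulses, azimuths_deg, z=0.001, x0=0.0, y0=0.0):
    """Integrate encoder pulses and azimuths into a plane track (formulas (4)-(5))."""
    xs, ys = [x0], [y0]
    for i in range(1, len(pulses)):
        d = z * (pulses[i] - pulses[i - 1])       # distance since sample i-1, m
        h = math.radians(azimuths_deg[i - 1])     # heading at the previous sample
        xs.append(xs[-1] + d * math.cos(h))
        ys.append(ys[-1] + d * math.sin(h))
    return xs, ys

def lane_center(xs, ys, azimuths_deg, d_lat):
    """Shift the vehicle track by the lateral offset d_L (formulas (7)-(8))."""
    cx = [x - dl * math.cos(math.radians(h))
          for x, dl, h in zip(xs, d_lat, azimuths_deg)]
    cy = [y + dl * math.sin(math.radians(h))
          for y, dl, h in zip(ys, d_lat, azimuths_deg)]
    return cx, cy

# Straight drive along +x: 100 pulses (0.1 m) between samples, heading 0 degrees.
track_x, track_y = dead_reckon([0, 100, 200], [0.0, 0.0, 0.0])
```

With a constant 0.5 m offset and zero azimuth, the corrected center line is simply the track shifted by −0.5 m in x, as formulas (7)–(8) dictate.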
Further, the step S3 specifically includes the following steps:
step S31: extract the plane track of the lane center line: first, use the data-collection vehicle to acquire vehicle-track inertial-navigation data and line-scan laser data in several passes, each pass starting acquisition from the same calibrated fixed starting point taken as the coordinate origin. After acquisition, extract and correct the plane driving track of each batch from the azimuth data and the wheel-axle travel distance; then superimpose the plane-track data of the different batches on one coordinate system, using the common coordinate origin of each batch as the reference. Finally, prune the superimposed driving-track layer and trim the lane and ramp track information within the interchange areas;
step S32: draw the plane base map of the high-precision map: first, locate and measure the transition points of the horizontal alignment and the radii of the circular curves of the lane. Second, taking the lane center line as the reference, position the lane edges according to the lane width measured during edge-line recognition; because the curvature changes along curve segments, the lane-width information obtained from edge-line recognition is combined with the coordinates of the tangent–circle transition points, and the circular-curve radius is used to calibrate the edges of curved lanes. Finally, add the road-marking information and its coordinate positions, as acquired from the two-dimensional road laser images, to the plane base map of the high-precision map;
step S33: and constructing a three-dimensional high-precision map.
Further, the step S33 specifically includes the following steps:
step S331: construct the micro-cell coordinates of the plane base map: divide the lane plane into micro-cells and, after determining the plane coordinates of each micro-cell, import the height information of the cells; 10 cm × 10 cm micro-cells are used to convert the plane base map into point-cloud data;
step S332: adding lane longitudinal gradient information: firstly, taking the coordinate origin of a plane base map as a starting point, and converting the longitudinal gradient values of all points of the lane central line into height information in the Z-axis direction; secondly, calibrating coordinates of a cell where the lane center line is located, and importing height information of each point of the lane center line into the cell;
step S333: adding lane transverse gradient information: taking a lane transverse section line as a unit, combining a Z-axis coordinate value of a lane center line and a lane transverse gradient, and converting the height information of each cell on the transverse section line; inputting the height information into each cell as a Z-axis coordinate value thereof;
step S334: and generating a three-dimensional high-precision map.
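Steps S331–S333 amount to rasterising the lane into 10 cm × 10 cm cells and assigning each cell a Z value from the centre-line longitudinal profile plus the cross slope. A minimal sketch under assumed conventions (odd column count, middle column = centre line, positive cross slope falling towards higher column indices):

```python
def cell_heights(n_long, n_lat, center_z, cross_slope, cell=0.1):
    """Z coordinate (m) of each 10 cm x 10 cm micro-cell of one lane.

    center_z[i]    - centre-line height at longitudinal station i (step S332)
    cross_slope[i] - cross slope (m/m) at station i (step S333)
    """
    half = n_lat // 2                       # middle column holds the centre line
    grid = []
    for i in range(n_long):                 # one cross-section line per station
        row = [center_z[i] + (j - half) * cell * cross_slope[i]
               for j in range(n_lat)]       # offset each cell by the cross slope
        grid.append(row)
    return grid

# Two stations, three 10 cm columns, 2% cross slope, 1% longitudinal rise.
grid = cell_heights(2, 3, [1.0, 1.01], [0.02, 0.02])
```

Each (station, column, Z) triple is one point of the point cloud that step S334 assembles into the three-dimensional map.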
Compared with the prior art, the invention has the following beneficial effects:
the invention can realize the automatic acquisition of road geometric parameters at high speed, high efficiency and road network level, reduce the difficulty of data acquisition and the consumption of manpower, eliminate the errors caused by the vibration and the offset of a detected vehicle in the road geometric detection and improve the accuracy of the measurement.
Drawings
Fig. 1 is a diagram illustrating an appearance of a data collection vehicle and an architecture of a vehicle-mounted device according to an embodiment of the present invention.
FIG. 2 is a diagram of automatic lane edge-line recognition according to an embodiment of the present invention: FIG. 2(a), original 2D laser image;
FIG. 2(b), binarization result; FIG. 2(c), dilation and erosion result; FIG. 2(d), edge-line identification and positioning result.
FIG. 3 illustrates the calculation of the offset displacement according to an embodiment of the present invention: FIG. 3(a), both the left and right edge lines captured; FIG. 3(b), only the right edge line captured; FIG. 3(c), only the left edge line captured; FIG. 3(d), neither edge line captured.
Fig. 4 is a diagram illustrating the extraction and correction of the driving trajectory and the measurement of the radius of the circular curve according to the embodiment of the present invention.
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for describing particular embodiments only and is not intended to limit example embodiments according to the present application. As used herein, the singular forms are intended to include the plural forms as well unless the context clearly indicates otherwise, and it should further be understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of stated features, steps, operations, devices, components and/or combinations thereof, but do not preclude the presence or addition of others.
As shown in fig. 1, this embodiment provides a three-dimensional high-precision map drawing method based on vehicle-mounted laser and inertial navigation data, which involves providing a collection vehicle, an inertial measurement unit installed inside it, three-dimensional line-scan laser devices symmetrically suspended at the two ends of the vehicle tail, and a wheel-mounted distance meter installed at a rear wheel. The inertial measurement unit, the line-scan laser devices and the distance meter are all communicatively connected to a terminal in the collection vehicle and transmit to it in real time the two-dimensional road laser images collected by the laser devices, the azimuth data collected by the inertial measurement unit, and the wheel-axle travel-distance data recorded by the distance meter;
the three-dimensional high-precision map drawing method based on the device and the data comprises the following steps:
step S1: positioning the transverse offset of the driving track in the driving process of the collection vehicle;
step S2: correcting a road plane driving track;
step S3: constructing a three-dimensional high-precision map;
step S4: and the terminal checks geographic information and road three-dimensional linear data in real time through mobile phone APP software and conducts three-dimensional high-precision map navigation.
In this embodiment, the step S1 specifically includes the following steps:
step S11: collecting a two-dimensional laser image of a pavement by using three-dimensional line scanning laser equipment arranged on a collection vehicle, and performing image binarization processing by using the Otsu method;
step S12: the lane edge lines generated by the binarization process may be void or discontinuous due to the old and depainting of the lane edge lines (see fig. 2 (b)). In order to fill up the void and discontinuous areas of the edge line, the embodiment adopts a geometric morphology operation algorithm based on expansion and corrosion, and the method performs convolution operation on the binary image by selecting a proper structural element, so that the inner cavity of the edge line is filled, and the discontinuous areas are connected, so that the edge line is smoother and more complete. The practical effects are shown in FIG. 2 (c).
Filling a void part inside a lane edge line in a binary image by using the geometric morphological operation of expansion corrosion;
step S13: recognizing the edge line of the lane based on a Canny algorithm;
step S14: after the identification of the lane edge line in step S13 is completed, the relative position of the lane marking on the two-dimensional laser image is found, thereby locating the lateral offset amount of the vehicle in driving.
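The fill operation of step S12 is a morphological closing: dilation followed by erosion, the "expansion corrosion" operation named above. A dependency-free sketch on a 0/1 image with a 3 × 3 structuring element (the window size and border handling here are assumptions, not the patent's stated choices):

```python
def dilate(img, k=1):
    h, w = len(img), len(img[0])
    # A pixel becomes 1 if any pixel in its (2k+1)x(2k+1) window is 1.
    return [[int(any(img[rr][cc]
                     for rr in range(max(0, r - k), min(h, r + k + 1))
                     for cc in range(max(0, c - k), min(w, c + k + 1))))
             for c in range(w)] for r in range(h)]

def erode(img, k=1):
    h, w = len(img), len(img[0])
    # A pixel stays 1 only if every pixel in its window is 1.
    return [[int(all(img[rr][cc]
                     for rr in range(max(0, r - k), min(h, r + k + 1))
                     for cc in range(max(0, c - k), min(w, c + k + 1))))
             for c in range(w)] for r in range(h)]

def close_gaps(img):
    """Closing = dilate then erode: fills small voids in a worn marking."""
    return erode(dilate(img))

# A 3-pixel-wide edge-line stripe with a one-pixel paint void at row 2.
worn = [[0, 0, 1, 1, 1, 0, 0],
        [0, 0, 1, 1, 1, 0, 0],
        [0, 0, 1, 0, 1, 0, 0],
        [0, 0, 1, 1, 1, 0, 0],
        [0, 0, 1, 1, 1, 0, 0]]
healed = close_gaps(worn)
```

The void pixel is filled while the stripe's outer width is preserved, which is exactly the behaviour fig. 2(c) illustrates.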
In this embodiment, the image binarization in step S11 using the Otsu method is specifically as follows: in binarization, the threshold is critical for recognizing the target object (the lane edge line). If the threshold is too small, the edge line cannot be extracted; if it is too large, noise (non-target objects) in the binarized image increases, making edge-line identification harder.
An improved Otsu method is used for image binarization. With threshold t the image is divided into two parts I1 and I2, where I1 contains the pixels with gray level in [0, t] and I2 the pixels with gray level in [t+1, L−1]; the threshold is calculated by formulas (1)–(3). The original edge-line image is shown in fig. 2(a) and the binarization result in fig. 2(b).
w0 = Σ_{i=0}^{t} p_i (1)

w1 = Σ_{i=t+1}^{L−1} p_i (2)

t* = argmax_{0 ≤ t ≤ L−1} [ w0 · w1 · (μ0 − μ1)² ] (3)
where p_i is the proportion (%) of pixels with gray value i among all pixels; w0 is the background proportion (%); w1 is the foreground proportion (%); t is the segmentation threshold, with range 0 to L−1; L is the maximum gray level of the image; μ0 and μ1 are the mean gray levels of I1 and I2.
In this embodiment, step S14 is specifically: perform edge detection on the lane edge lines with the Canny algorithm, traverse the edge-line edges in a closed loop over non-zero pixel values to determine the geometric outline of each edge line, and eliminate non-edge-line noise according to the shape and size of the outline. Since a lane edge-line marking is rectangular, the longitudinal pixels of its outline are fitted and the positions of its outer boundaries on both sides in the image are extracted, thereby determining the coordinate position of the edge line. The 2D laser image data collected in this embodiment covers 2 m × 4 m. Because the line-scan laser systems are symmetrically suspended at the two ends of the vehicle tail, the center line of the collected two-dimensional laser image is the center line of the collection vehicle's driving track; combined with the image coordinates of the lane edge lines, the lateral position of the vehicle in the actual lane is measured and the lateral offset displacement of the driving track is calculated. Owing to vehicle drift and the gaps of dashed edge lines, three capture situations occur in an image: both edge lines captured, only one edge line captured, and neither edge line captured.
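Converting a detected image column into a lateral position only needs the pixel-to-metre scale of the laser image. The sketch below assumes a hypothetical 2048-pixel-wide image spanning the 4 m transverse coverage, with the image centre line (the vehicle track) at 0; the resolution is an illustrative assumption, not a figure given in the patent:

```python
def px_to_metres(col, img_width_px=2048, coverage_m=4.0):
    """Transverse position (m) of image column `col`; image centre line = 0."""
    scale = coverage_m / img_width_px          # metres per pixel
    return (col - img_width_px / 2) * scale
```

An edge line detected at column 0 sits 2 m to the left of the vehicle track; comparing the lane-center column computed from both edges with the image centre then gives the offset displacement of step S14.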
In this embodiment, calculating the lateral offset displacement of the driving track covers the three cases in which both edge lines are captured, only one edge line is captured, and neither edge line is captured;
both edge lines are captured when the lateral offset displacement of the collection vehicle is less than 25 cm and the left and right edge lines are both solid, or when the solid segments of a dashed edge line happen to be captured; the actual lane width is calculated from the coordinates of the identified left and right edge lines on the image; then the distance between the midline of the left and right edge lines, namely the lane center line, and the center line of the image, namely the center line of the detection vehicle's driving track, is measured, and this distance difference is the offset displacement of the driving track;
only one edge line is captured when the collection vehicle deviates to one side by more than 25 cm and the remaining edge line is a solid line or the solid segment of a dashed line, or when the offset displacement of the collection vehicle is less than 25 cm but the edge line on one side falls in the blank segment of a dashed edge line, as shown in fig. 3(b) and 3(c); the coordinates of the lane center line on the image are determined from the measured lane width and the coordinate position of the single captured edge line; the offset displacement of the driving track is then calculated from the coordinate positions of the lane center line and the driving-track center line;
neither the left nor the right edge line is captured on the image when the collection vehicle deviates to one side by more than 25 cm and the remaining edge line falls in the blank segment of a dashed edge line, as shown in fig. 3(d); in view of the continuity of the driving track, the average of the driving-track offset displacements of the preceding and following images is taken and defined as the offset displacement of the driving track on this image.
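The three cases above can be folded into one small helper. A hedged sketch: parameter names, the pixel-to-millimetre scaling, and the half-lane-width construction for the one-sided case are illustrative renderings of the description, not code from the patent.

```python
def track_offset(left_px, right_px, img_width_px, mm_per_px,
                 lane_width_mm=None, neighbour_offsets=None):
    """Lateral offset (mm) of the driving track from the lane center line.

    left_px / right_px are the image columns of the detected edge lines
    (None when not captured); the image center line is taken as the
    driving-track center line.  lane_width_mm is the last measured lane
    width (needed in the one-sided case) and neighbour_offsets holds the
    offsets of the preceding and following images (needed when neither
    line is captured).  Parameter names are illustrative.
    """
    track_px = img_width_px / 2.0
    if left_px is not None and right_px is not None:
        # Case 1: both lines captured -> lane center is their midpoint.
        lane_center_px = (left_px + right_px) / 2.0
    elif left_px is not None or right_px is not None:
        # Case 2: one line captured -> lane center lies half a lane width
        # inside the captured line.
        half_px = lane_width_mm / (2.0 * mm_per_px)
        lane_center_px = (left_px + half_px if left_px is not None
                          else right_px - half_px)
    else:
        # Case 3: neither line captured -> average the neighbours' offsets.
        return sum(neighbour_offsets) / len(neighbour_offsets)
    return (track_px - lane_center_px) * mm_per_px
```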
In this embodiment, the specific content of step S2 is: the plane driving track of the collection vehicle is drawn from the azimuth angle data collected by the inertial measurement unit and the wheel-axle running distance data recorded by the wheel-type (optical) distance measuring instrument; the data acquisition of the inertial measurement unit is triggered by the pulses sent by the wheel-type distance measuring instrument, giving a data sample interval of 10 cm; the horizontal and vertical coordinate values of the vehicle's plane driving track are obtained through formulas (4) to (6); the plane linear track of the lane center line is then obtained from the collected vehicle driving track and the offset displacement of the vehicle relative to the lane center line, its horizontal and vertical coordinate values being calculated by formulas (7) and (8); the variable and parameter information is shown in figure 4.
x_i = x_{i-1} + z × (D_i − D_{i-1}) × cos H_{i-1}    (4)
y_i = y_{i-1} + z × (D_i − D_{i-1}) × sin H_{i-1}    (5)
(Equation (6) appears only as an image in the original and is not reproduced here.)
x′_i = x_i − d_L × cos H_i    (7)
y′_i = y_i + d_L × sin H_i    (8)
In the formula, x_i represents the abscissa of the ith sample data point of the driving track of the detection vehicle, in m; y_i represents the ordinate of the ith sample data point of the driving track, in m; D_i represents the ith sample pulse value sent by the photoelectric encoder; H_i represents the azimuth angle at the ith sample data point of the driving track, in degrees; n represents the total number of samples collected by the inertial measurement unit; m represents the total number of samples collected by the line scanning laser equipment; x′_i represents the abscissa of the lane center line, in m; y′_i represents the ordinate of the lane center line, in m; d_L represents the lateral offset displacement of the detection vehicle, in m.
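Equations (4), (5), (7) and (8) amount to a dead-reckoning loop followed by a lateral shift toward the lane center line. A minimal sketch, applying (7)-(8) exactly as printed; z is the pulse-to-metre scale factor whose defining equation (6) appears only as an image in the original:

```python
import math

def plane_track(D, H, z, d_L):
    """Dead-reckon the plane driving track from encoder pulse counts D[i]
    and azimuths H[i] (degrees) via equations (4)-(5), then shift each
    point to the lane center line with the offsets d_L[i] (m) via
    equations (7)-(8) as printed.  z is the pulse-to-metre scale factor.
    """
    xs, ys = [0.0], [0.0]
    for i in range(1, len(D)):
        step = z * (D[i] - D[i - 1])            # distance travelled, m
        h = math.radians(H[i - 1])
        xs.append(xs[-1] + step * math.cos(h))  # eq. (4)
        ys.append(ys[-1] + step * math.sin(h))  # eq. (5)
    center = [(x - d * math.cos(math.radians(h)),   # eq. (7)
               y + d * math.sin(math.radians(h)))   # eq. (8)
              for x, y, h, d in zip(xs, ys, H, d_L)]
    return xs, ys, center
```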
In this embodiment, the step S3 specifically includes the following steps:
step S31: extracting the lane center line plane track: to avoid traffic-flow peak periods and to cover all lanes and ramps, data must be collected in multiple runs. Firstly, the vehicle-track inertial navigation data and the line-scan laser point cloud data are collected in multiple runs with the data collection vehicle; each run takes a calibrated fixed starting point as the coordinate origin for data collection. After data collection is finished, the plane driving track of each batch is extracted and corrected from the azimuth data and the wheel-axle running distance; then, taking the common coordinate origin of the batches as reference, the plane driving-track data of the different batches are superposed in the same coordinate system; finally, the superposed driving-track layers are pruned, trimming away the lane and ramp track information lying outside the interchange area;
step S32: drawing the high-precision map plane base map: on the basis of the extracted lane center line plane track, the demarcation points of the plane linear changes and the radii of the circular curves of the lane are first located and measured; secondly, with the lane center line as reference, the lane edges are positioned according to the lane width measured during edge-line recognition; because of the curvature change along curved segments, the lane width obtained from edge-line recognition must be combined with the coordinates of the tangent-to-circle and circle-to-tangent points and the circular-curve radius to calibrate the edges of curved lanes; finally, the road marking information and its coordinate positions, acquired from the two-dimensional road laser images, are added to the high-precision map plane base map;
step S33: constructing the three-dimensional high-precision map and drawing an interchange high-precision map with a precision of 10 cm.
In this embodiment, the three-dimensional high-precision map is built on the drawn plane base map by adding a Z-axis coordinate for each point on the lane, derived from the lane's transverse and longitudinal gradient information. The step S33 specifically includes the following steps:
step S331: constructing the plane base map micro-cell coordinates: the lane plane is divided into micro-cells and, after the plane coordinates of each micro-cell are determined, the height information is imported into the cells; 10 cm × 10 cm micro-cells are adopted to convert the plane base map into point cloud data;
step S332: adding lane longitudinal gradient information: firstly, taking the coordinate origin of the plane base map as the starting point, the longitudinal gradient values at all points of the lane center line are converted into height information in the Z-axis direction; secondly, the coordinates of the cells containing the lane center line are calibrated, and the height information of each center-line point is imported into its cell;
step S333: adding lane transverse gradient information: taking each lane cross-section line as the unit, the height of every cell on the cross-section line is converted from the Z-axis coordinate value of the lane center line and the lane transverse gradient; this height is entered into each cell as its Z-axis coordinate value;
step S334: and generating a three-dimensional high-precision map.
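Steps S331 to S333 can be sketched as filling a Z grid: integrate the longitudinal grade along the center line, then spread each cross-section outward with the transverse grade. A minimal sketch under assumed conventions (grades in m/m, the center line in the middle column of the grid); the grid layout and parameter names are illustrative.

```python
import numpy as np

def add_heights(n_long, n_lat, long_grade, cross_grade, cell=0.1):
    """Fill a Z-coordinate grid for an n_long x n_lat field of 10 cm cells.

    long_grade[i] is the longitudinal grade (m/m) at station i of the lane
    center line, integrated from the map origin (step S332); cross_grade[i]
    is the transverse grade (m/m) of cross-section i, applied outward from
    the center-line column (step S333).  Layout and names are assumptions.
    """
    # S332: center-line heights by integrating the longitudinal grade.
    center_z = np.concatenate(
        ([0.0], np.cumsum(np.asarray(long_grade[:-1]) * cell)))
    mid = n_lat // 2                          # column of the lane center line
    z = np.zeros((n_long, n_lat))
    for i in range(n_long):
        # S333: offset each cell of the cross-section by the transverse grade.
        lateral = (np.arange(n_lat) - mid) * cell
        z[i, :] = center_z[i] + lateral * cross_grade[i]
    return z
```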
The three-dimensional high-precision map can thus be established by defining the height value of each infinitesimal point of the road plane base map. The road marking and lane edge-line information collected earlier is embedded into the surface layer of the three-dimensional high-precision map. To display the road surroundings more intuitively, the satellite image of the map coverage area is downloaded according to its longitude and latitude coordinates and used as the bottom layer of the three-dimensional high-precision map.
Preferably, in this embodiment, dilation-erosion morphological operations and an improved Canny edge detection algorithm are first used to identify and locate the lane edge lines in the two-dimensional road-surface laser images, and the lane center line track is extracted in combination with the collected vehicle driving track. Lane plane linear information is then extracted with a hierarchical clustering model, a linear fitting algorithm, and the chord offset method. Finally, the road gradient information is calculated and corrected using the road-surface transverse profile data acquired by the line-scan laser and a bilateral filtering algorithm. In this embodiment, the three-dimensional high-precision map is constructed from the acquired lane center line plane track, the plane linear element information, and the transverse and longitudinal gradients. Intelligent 3D geographic information systems for the PC end and the mobile phone APP end are also developed to achieve information sharing.
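The chord offset idea mentioned above rests on the circle relation between a chord and its mid-ordinate (sagitta). One common form of that relation is shown here as an illustration; the patent's exact measurement procedure may differ.

```python
def circle_radius_from_chord(chord, mid_ordinate):
    """Radius of a circular curve from a chord of length `chord` and its
    mid-ordinate (sagitta) m, using R = c^2 / (8 m) + m / 2."""
    return chord ** 2 / (8.0 * mid_ordinate) + mid_ordinate / 2.0
```

For example, a 60 m chord whose midpoint sits 10 m off the arc implies a 50 m radius.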
Lateral deviation of the collection vehicle while driving causes the driving track to deviate from the lane center line and affects the measurement of the plane linear elements. In this embodiment, lane edge-line recognition and positioning are performed on the road-surface 2D laser image data acquired by the line scanning laser system, and the offset displacement of the collection vehicle relative to the lane center line during driving is calibrated from the coordinate positions of the recognized lane edge lines on the images.
Preferably, in this embodiment, the inertial measurement unit (IMU) is an independent sensor (see fig. 1) composed of accelerometers and gyroscopes. The collected data are output through an RS232 interface; the bandwidth is 1-200 Hz and the digital output frequency is 1000 Hz. The accelerometers measure linear acceleration in three directions, and the gyroscopes measure the vehicle's absolute angular velocity in three directions. The analog variables of each IMU sensor are converted into digital information by an A/D converter and resolved by a CPU; after compensation steps such as temperature compensation, scale-factor compensation, gyroscope zero-offset and incremental-calibration zero-offset correction, and installation error compensation, the angular rate and acceleration of the vehicle's motion are output.
The installation of the three-dimensional line scanning laser apparatus is shown in fig. 1. The laser device works by measuring the phase shift accumulated by a modulated light wave over its time of flight, which determines the path travelled by the wave. The data acquisition speed of the three-dimensional laser equipment used in this work reaches 100 km/h; the measurement precision is 1 mm in the horizontal direction and 0.3 mm in the vertical direction. Output: serial ports RS232/RS422 and 4-20 mA analog output, with the analog port and switch values configurable in software; the LDM43 provides Profibus DP and SSI bus interfaces and is easy to integrate into an industrial field bus; a synchronous input allows several sensors to measure synchronously.
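The phase-shift ranging principle stated above can be written out directly: with modulation frequency f, a measured phase shift Δφ over the out-and-back path implies d = c·Δφ/(4π·f). A hedged sketch of that relation only; the LDM43's internal processing is of course more involved.

```python
import math

def phase_shift_distance(delta_phi, f_mod, c=299_792_458.0):
    """Distance implied by a phase shift delta_phi (radians) of a light
    wave amplitude-modulated at f_mod (Hz).  The wave travels out and
    back, so d = c * delta_phi / (4 * pi * f_mod); the result is
    unambiguous only within half a modulation wavelength.
    """
    return c * delta_phi / (4.0 * math.pi * f_mod)
```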
The inertial measurement unit is installed inside the vehicle, the three-dimensional laser equipment at the tail of the vehicle, and the wheel-type distance measuring instrument at a rear wheel. All three devices are connected simultaneously to the in-vehicle computer terminal and transmit and store data there in real time. They acquire data synchronously, so the data can be matched exactly; the acquisition speed reaches 100 km/h, and the acquired data are used to calculate the linear geometric information of the road.
Preferably, in this embodiment, the origin of the map coordinate system is determined from the positioned longitude and latitude in the data collected by the detection vehicle. Because the data volume is large, this embodiment divides the road into several layers according to the position of the road center line; the coordinate origin of every layer is located at the same point, and all layers are overlaid and imported into the drawing software. During layer overlay, the junction of each road is checked and repeated parts are removed, so that overlapping roads are avoided. After the road center line is drawn, it is widened using the road width obtained by marking-line edge detection, with additional widening at road turns, so that the drawn road width matches the actual road width; this yields the high-precision map plane base map of the road. The detected lane edge lines and marking-line information are then imported into the plane base map according to their positioning. On this two-dimensional basis, the measured road height and longitudinal slope values are entered into the corresponding road information, turning the road plane from two dimensions into three and giving it its undulating shape. Finally, the road transverse gradient is entered into the road information to complete the construction of the three-dimensional map.
Preferably, in this embodiment, the terminal is a PC or a mobile phone. The PC-end system consists of three subsystems: a two-dimensional geographic information subsystem, a three-dimensional map display subsystem, and a three-dimensional linear data subsystem. The geographic information subsystem provides data query and two-dimensional map display; the three-dimensional map display subsystem mainly provides automatic generation of the interchange three-dimensional map and automatic marking of three-dimensional linear information and slope-change points; the three-dimensional linear data subsystem mainly displays the lane plane linear information, slope-change points, circular-curve radii, and transverse and longitudinal gradient information obtained by calculation and correction through the series of algorithms above.
The mobile phone APP has three functions: 1) viewing geographic information; 2) three-dimensional high-precision map navigation; 3) consulting the road three-dimensional linear data. After logging into the APP, the user first enters the geographic information interface; clicking the position of an interchange on the two-dimensional map opens that interchange's three-dimensional map, in which the lanes and ramp directions on different levels can be viewed from different angles; clicking a single lane or ramp in the three-dimensional map opens a data interface that displays the three-dimensional linear information of the clicked ramp or lane, including the transverse and longitudinal gradients, the longitude and latitude coordinates of the slope-change points, and the circular-curve radii.
Preferably, this embodiment combines the vehicle-mounted inertial measurement unit, the three-dimensional laser equipment, and the wheel-type distance measuring instrument to automatically identify and locate the tangent-to-circle and circle-to-tangent points of horizontal curves, judge the horizontal-curve type, measure the horizontal-curve radius, and calculate the transverse and longitudinal gradients of the road. With a data acquisition speed of 100 km/h and a fully automatic data processing program, the method can measure and calculate road linear geometry at road-network level.
Meanwhile, this embodiment fully considers the shortcomings of automatic detection by vehicle-mounted equipment, namely the influence of track deviation, vehicle bounce, and vehicle tilt on the detection results during data acquisition. Corresponding algorithms, such as an improved K-means clustering method, linear fitting analysis, a multi-point random measurement method, and a cross-section correction method, are designed to eliminate the errors that track deviation, vehicle bounce, and tilt introduce into locating the tangent-to-circle and circle-to-tangent points, measuring the horizontal-curve radius, and calculating the transverse gradient. Verified against ground-truth values with field data, the accuracy reaches 98%.
Preferably, in this embodiment, a three-dimensional high-precision road map consistent with the actual road is generated from the data acquired by the detection vehicle through steps such as layer overlay, deduplication, and widening. The developed PC-end system automatically generates the three-dimensional high-precision map from the imported raw data and distributes the generated map through the cloud. In addition, this embodiment develops a mobile phone APP system that downloads the three-dimensional high-precision map from the cloud, connects to an automatic driving system, guides automatic driving operation, and plans the automatic driving route. The embodiment can also be applied to an automatic-driving high-precision map navigation system, providing data support for realizing automatic driving and laying a technical foundation for intelligent road research; it also gives road administration departments an efficient, safe, and accurate method for measuring road plane linear geometry and transverse and longitudinal gradients, applicable to road completion acceptance, routine maintenance and repair, and the supplementation and improvement of linear data.
The above description is only a preferred embodiment of the present invention, and all equivalent changes and modifications made in accordance with the claims of the present invention should be covered by the present invention.

Claims (8)

1. A three-dimensional high-precision map drawing method based on vehicle-mounted laser inertial navigation data is characterized by comprising the following steps: providing a collection vehicle, an inertial measurement unit arranged in the collection vehicle, three-dimensional line scanning laser equipment symmetrically suspended at the two ends of the tail of the collection vehicle, and a wheel-type distance meter arranged on a rear wheel of the vehicle; the inertial measurement unit, the three-dimensional line scanning laser equipment and the wheel-type distance meter are all in communication connection with a terminal in the collection vehicle and are used for transmitting, in real time, the two-dimensional road laser image collected by the three-dimensional line scanning laser equipment, the azimuth angle data collected by the inertial measurement unit and the wheel-axle running distance data recorded by the wheel-type distance meter to the terminal;
the three-dimensional high-precision map drawing method based on the device and the data comprises the following steps:
step S1: positioning the transverse offset of the driving track in the driving process of the collection vehicle;
step S2: correcting a road plane driving track;
step S3: constructing a three-dimensional high-precision map;
step S4: and the terminal checks geographic information and road three-dimensional linear data in real time through mobile phone APP software and conducts three-dimensional high-precision map navigation.
2. The three-dimensional high-precision map drawing method based on the vehicle-mounted laser inertial navigation data according to claim 1, characterized by comprising the following steps: the step S1 specifically includes the following steps:
step S11: collecting a two-dimensional laser image of a pavement by using three-dimensional line scanning laser equipment arranged on a collection vehicle, and performing image binarization processing by using the Otsu method;
step S12: filling the hollow parts inside the lane edge lines in the binary image using dilation-erosion morphological operations;
step S13: recognizing the edge line of the lane based on a Canny algorithm;
step S14: after the identification of the lane edge line in step S13 is completed, the relative position of the lane marking on the two-dimensional laser image is found, thereby locating the lateral offset amount of the vehicle in driving.
3. The three-dimensional high-precision map drawing method based on the vehicle-mounted laser inertial navigation data according to claim 2, characterized by comprising the following steps: the specific content of the image binarization processing using the Otsu method in step S11 is as follows:
performing image binarization processing using an improved Otsu method, which at threshold t divides the image into two parts I1 and I2, wherein I1 comprises the pixels with gray levels in the range [0, t] and I2 comprises the pixels with gray levels in the range [t+1, L-1]; the specific calculation of the threshold is shown in formulas (1) to (3);
w0 = Σ_{i=0}^{t} p_i    (1)
w1 = Σ_{i=t+1}^{L-1} p_i = 1 − w0    (2)
t* = Argmax_{0≤t≤L-1} { w0(t) · w1(t) · [μ0(t) − μ1(t)]^2 }, where μ0 and μ1 are the mean gray levels of I1 and I2    (3)
in the formula, p_i is the proportion of the number of pixels with gray level i in the total number of pixels, %; w0 represents the background proportion, %; w1 represents the foreground proportion, %; t represents the segmentation threshold, with value range 0 to L-1; L represents the maximum gray-scale value of the image.
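As a concrete illustration of formulas (1) to (3), a direct (unoptimized) Otsu threshold search is sketched below; μ0 and μ1 denote the mean gray levels of the background and foreground classes I1 and I2.

```python
import numpy as np

def otsu_threshold(gray, L=256):
    """Otsu threshold over gray levels [0, L-1]: choose the t maximising
    the between-class variance w0*w1*(mu0-mu1)^2 of the background [0, t]
    and foreground [t+1, L-1] classes (formulas (1)-(3))."""
    hist = np.bincount(gray.ravel(), minlength=L).astype(float)
    p = hist / hist.sum()                     # p_i: gray-level frequencies
    best_t, best_var = 0, -1.0
    for t in range(L - 1):
        w0 = p[:t + 1].sum()                  # background proportion
        w1 = 1.0 - w0                         # foreground proportion
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t + 1) * p[:t + 1]).sum() / w0
        mu1 = (np.arange(t + 1, L) * p[t + 1:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2      # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```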
4. The three-dimensional high-precision map drawing method based on the vehicle-mounted laser inertial navigation data according to claim 2, characterized by comprising the following steps: the specific content of step S14 is: performing edge detection on the lane edge lines using the Canny algorithm, performing a closed-loop traversal of the lane edge-line edges based on non-zero pixel values to determine the geometric outline of each edge line, and eliminating the noise influence of non-edge-line objects according to the shape and size of the geometric outline; the lane edge-line marking being rectangular, fitting the longitudinal pixel points of the lane edge-line outline and extracting the position information of the outer contours on both sides of the lane edge line on the image, thereby determining the coordinate position of the lane edge line; and taking the center line of the collected two-dimensional laser image as the center line of the driving track of the collection vehicle, measuring the lateral position of the collection vehicle in the actual lane by combining the coordinate positions of the lane edge lines on the image, and calculating the lateral offset displacement of the driving track.
5. The three-dimensional high-precision map drawing method based on the vehicle-mounted laser inertial navigation data according to claim 4, characterized in that: the calculation of the lateral offset displacement of the driving track covers the three cases in which both edge lines are captured, only one edge line is captured, and neither edge line is captured;
both edge lines are captured when the lateral offset displacement of the collection vehicle is less than 25 cm and the left and right edge lines are both solid, or when the solid segments of a dashed edge line happen to be captured; the actual lane width is calculated from the coordinates of the identified left and right edge lines on the image; the distance between the midline of the left and right edge lines, namely the lane center line, and the center line of the image, namely the center line of the driving track of the detection vehicle, is then measured, and this distance difference is the offset displacement of the driving track;
only one edge line is captured when the collection vehicle deviates to one side by more than 25 cm and the remaining edge line is a solid line or the solid segment of a dashed line, or when the offset displacement of the collection vehicle is less than 25 cm but the edge line on one side falls in the blank segment of a dashed edge line; the coordinates of the lane center line on the image are determined from the measured lane width and the coordinate position of the single captured edge line; the offset displacement of the driving track is then calculated from the coordinate positions of the lane center line and the driving-track center line;
and neither the left nor the right edge line is captured on the image when the collection vehicle deviates to one side by more than 25 cm and the remaining edge line falls in the blank segment of a dashed edge line; in view of the continuity of the driving track, the average of the driving-track offset displacements of the preceding and following images is then taken and defined as the offset displacement of the driving track on this image.
6. The three-dimensional high-precision map drawing method based on the vehicle-mounted laser inertial navigation data according to claim 1, characterized by comprising the following steps: the specific content of step S2 is: drawing the plane driving track of the collection vehicle from the azimuth angle data collected by the inertial measurement unit and the wheel-axle running distance data recorded by the wheel-type (optical) distance measuring instrument; the data acquisition of the inertial measurement unit being triggered by the pulses sent by the wheel-type distance measuring instrument, with a data sample interval of 10 cm; obtaining the horizontal and vertical coordinate values of the vehicle's plane driving track through formulas (4) to (6); and obtaining the plane linear track of the lane center line from the collected vehicle driving track and the offset displacement of the vehicle relative to the lane center line, the horizontal and vertical coordinate values being calculated by formulas (7) and (8);
x_i = x_{i-1} + z × (D_i − D_{i-1}) × cos H_{i-1}    (4)
y_i = y_{i-1} + z × (D_i − D_{i-1}) × sin H_{i-1}    (5)
(Equation (6) appears only as an image in the original and is not reproduced here.)
x′_i = x_i − d_L × cos H_i    (7)
y′_i = y_i + d_L × sin H_i    (8)
in the formula, x_i represents the abscissa of the ith sample data point of the driving track of the detection vehicle, in m; y_i represents the ordinate of the ith sample data point of the driving track, in m; D_i represents the ith sample pulse value sent by the photoelectric encoder; H_i represents the azimuth angle at the ith sample data point of the driving track, in degrees; n represents the total number of samples collected by the inertial measurement unit; m represents the total number of samples collected by the line scanning laser equipment; x′_i represents the abscissa of the lane center line, in m; y′_i represents the ordinate of the lane center line, in m; d_L represents the lateral offset displacement of the detection vehicle, in m.
7. The three-dimensional high-precision map drawing method based on the vehicle-mounted laser inertial navigation data according to claim 1, characterized by comprising the following steps: the step S3 specifically includes the following steps:
step S31: extracting the lane center line plane track: firstly, collecting the vehicle-track inertial navigation data and the line-scan laser point cloud data in multiple runs with the data collection vehicle, each run taking a calibrated fixed starting point as the coordinate origin for data collection; after data collection is finished, extracting and correcting the plane driving track of each batch from the azimuth data and the wheel-axle running distance; then, taking the common coordinate origin of the batches as reference, superposing the plane driving-track data of the different batches in the same coordinate system; and finally, pruning the superposed driving-track layers and trimming away the lane and ramp track information lying outside the interchange area;
step S32: drawing the high-precision map plane base map: firstly, locating and measuring the demarcation points of the plane linear changes and the radii of the circular curves of the lane; secondly, with the lane center line as reference, positioning the lane edges according to the lane width measured during edge-line recognition, wherein, because of the curvature change along curved segments, the lane width obtained from edge-line recognition is combined with the coordinates of the tangent-to-circle and circle-to-tangent points and the circular-curve radius to calibrate the edges of curved lanes; and finally, adding the road marking information and its coordinate positions, acquired from the two-dimensional road laser image, to the high-precision map plane base map;
step S33: and constructing a three-dimensional high-precision map.
8. The three-dimensional high-precision map drawing method based on the vehicle-mounted laser inertial navigation data according to claim 1, characterized by comprising the following steps: the step S33 specifically includes the following steps:
step S331: constructing the plane base map micro-cell coordinates: dividing the lane plane into micro-cells and, after determining the plane coordinates of each micro-cell, importing the height information into the cells; wherein 10 cm × 10 cm micro-cells are adopted to convert the plane base map into point cloud data;
step S332: adding lane longitudinal gradient information: firstly, taking the coordinate origin of the plane base map as the starting point, converting the longitudinal gradient values at all points of the lane center line into height information in the Z-axis direction; secondly, calibrating the coordinates of the cells containing the lane center line and importing the height information of each center-line point into its cell;
step S333: adding lane transverse gradient information: taking each lane cross-section line as the unit, converting the height of every cell on the cross-section line from the Z-axis coordinate value of the lane center line and the lane transverse gradient, and entering this height into each cell as its Z-axis coordinate value;
step S334: and generating a three-dimensional high-precision map.
CN201910884069.8A 2019-09-19 2019-09-19 Three-dimensional high-precision map drawing method based on vehicle-mounted laser inertial navigation data Pending CN110864696A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910884069.8A CN110864696A (en) 2019-09-19 2019-09-19 Three-dimensional high-precision map drawing method based on vehicle-mounted laser inertial navigation data


Publications (1)

Publication Number Publication Date
CN110864696A true CN110864696A (en) 2020-03-06

Family

ID=69652234

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910884069.8A Pending CN110864696A (en) 2019-09-19 2019-09-19 Three-dimensional high-precision map drawing method based on vehicle-mounted laser inertial navigation data

Country Status (1)

Country Link
CN (1) CN110864696A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102288121A (en) * 2011-05-12 2011-12-21 电子科技大学 Method for measuring and pre-warning lane departure distance based on monocular vision
CN107505644A (en) * 2017-07-28 2017-12-22 武汉理工大学 Three-dimensional high-precision map generation system and method based on vehicle-mounted multisensory fusion
CN108036794A (en) * 2017-11-24 2018-05-15 华域汽车系统股份有限公司 A kind of high accuracy map generation system and generation method
CN108845569A (en) * 2018-04-27 2018-11-20 厦门维斯云景信息科技有限公司 Generate semi-automatic cloud method of the horizontal bend lane of three-dimensional high-definition mileage chart
CN108955702A (en) * 2018-05-07 2018-12-07 西安交通大学 Based on the lane of three-dimensional laser and GPS inertial navigation system grade map creation system
KR101954963B1 (en) * 2018-01-15 2019-03-07 주식회사 스트리스 System and Method for Automatic Construction of Numerical Digital Map and High Definition Map

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LUO Wenting et al., "A 3D Road Alignment Measurement Method for High-Definition Maps", Bulletin of Surveying and Mapping (《测绘通报》) *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111678430A (en) * 2020-04-20 2020-09-18 上海城建城市运营(集团)有限公司 Road geometric linear and road surface three-dimensional structure reconstruction system and reconstruction method
CN111623788A (en) * 2020-05-29 2020-09-04 长沙酷哇人工智能及大数据产业技术研究院有限公司 Construction method of high-precision map for unmanned driving
CN112100311A (en) * 2020-11-19 2020-12-18 深圳市城市交通规划设计研究中心股份有限公司 Road traffic network geographic information data management method, device and system
CN112985428A (en) * 2021-04-22 2021-06-18 速度时空信息科技股份有限公司 Safety angle-based priority reference method for image layer of high-precision sensing map
CN113971723A (en) * 2021-10-25 2022-01-25 北京百度网讯科技有限公司 Method, device, equipment and storage medium for constructing three-dimensional map in high-precision map
CN113971723B (en) * 2021-10-25 2024-04-09 北京百度网讯科技有限公司 Method, device, equipment and storage medium for constructing three-dimensional map in high-precision map
CN113902830A (en) * 2021-12-08 2022-01-07 腾讯科技(深圳)有限公司 Method for generating track road network

Similar Documents

Publication Publication Date Title
CN110864696A (en) Three-dimensional high-precision map drawing method based on vehicle-mounted laser inertial navigation data
CN111551958B (en) Mining area unmanned high-precision map manufacturing method
CN110146910B (en) Positioning method and device based on data fusion of GPS and laser radar
EP2427726B1 (en) Methods and systems for creating digital transportation networks
US11423677B2 (en) Automatic detection and positioning of pole-like objects in 3D
US11093759B2 (en) Automatic identification of roadside objects for localization
CN102506824B (en) Method for generating digital orthophoto map (DOM) by urban low altitude unmanned aerial vehicle
US11782129B2 (en) Automatic detection of overhead obstructions
CN112836737A (en) Roadside combined sensing equipment online calibration method based on vehicle-road data fusion
CN117836653A (en) Road side millimeter wave radar calibration method based on vehicle-mounted positioning device
CN107145578A (en) Map constructing method, device, equipment and system
CN113280798B (en) Geometric correction method for vehicle-mounted scanning point cloud under tunnel GNSS rejection environment
CN106525057A (en) Generation system for high-precision road map
CN102208013A (en) Scene matching reference data generation system and position measurement system
CN104677361B (en) A kind of method of comprehensive location
US11920950B2 (en) System and method for generating precise road lane map data
CN114636993A (en) External parameter calibration method, device and equipment for laser radar and IMU
CN114859374B (en) Newly-built railway cross measurement method based on unmanned aerial vehicle laser point cloud and image fusion
CN112346463A (en) Unmanned vehicle path planning method based on speed sampling
JP2000338865A (en) Data gathering device for digital road map
CN102620745B (en) Airborne inertial measurement unite (IMU) collimation axis error calibration method
CN114877838B (en) Road geometric feature detection method based on vehicle-mounted laser scanning system
KR102475042B1 (en) Apparatus and method for establishing a hd map
CN115265493B (en) Lane-level positioning method and device based on non-calibrated camera
CN116635739A (en) Road side millimeter wave radar calibration method based on vehicle-mounted positioning device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200306