CN112050821B - Lane line aggregation method - Google Patents

Lane line aggregation method

Info

Publication number
CN112050821B
CN112050821B (application CN202010953265.9A; prior publication CN112050821A)
Authority
CN
China
Prior art keywords
lane line
aggregation
point
lane
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010953265.9A
Other languages
Chinese (zh)
Other versions
CN112050821A (en)
Inventor
刘立
丁亚芬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hubei Ecarx Technology Co Ltd
Original Assignee
Hubei Ecarx Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hubei Ecarx Technology Co Ltd filed Critical Hubei Ecarx Technology Co Ltd
Priority to CN202010953265.9A priority Critical patent/CN112050821B/en
Publication of CN112050821A publication Critical patent/CN112050821A/en
Application granted granted Critical
Publication of CN112050821B publication Critical patent/CN112050821B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G01C21/32 Structuring or formatting of map data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 Geographical information databases

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiment of the invention provides a lane line aggregation method, which comprises the following steps: obtaining visual crowdsourcing data; calculating, as first coordinates, the coordinates in a vehicle body coordinate system of a plurality of specified lane line points of each lane line in each frame of visual crowdsourcing data; converting the first coordinates of the lane line points of each lane line into second coordinates in a reference coordinate system based on the conversion relationship between the vehicle body coordinate system and the reference coordinate system; aggregating the lane lines within each preset time period; aggregating the first aggregated lane lines within two adjacent preset time periods; and, for each second aggregated lane line, fitting the lane line points of the second aggregated lane line according to their second coordinates to obtain a fitted lane line. By adopting the method provided by the embodiment of the invention, the accuracy of the determined fitted lane line is higher.

Description

Lane line aggregation method
Technical Field
The invention relates to the technical field of map drawing, in particular to a lane line aggregation method.
Background
The automatic driving technology needs to rely on high-precision road information provided by a high-precision map to perform vehicle positioning, path planning, driving decision making and the like. Therefore, how to obtain high-precision road information and create a high-precision map is important for the automatic driving technology.
Current ways of collecting high-precision road information mainly include: professional data acquisition systems based on lidar, data acquisition systems based on professional visual crowdsourcing data, and data acquisition systems based on ordinary visual crowdsourcing data. Visual crowdsourcing data is data with feature attribute information and coordinate information, generated by a chip built into a sensor that recognizes features in the scanning area in real time.
The road information acquired by data acquisition systems based on ordinary visual crowdsourcing data is not accurate enough to meet the requirements of producing a high-precision map. Professional lidar-based data acquisition systems are expensive to build, require professional personnel to maintain, and have long acquisition cycles, which makes it difficult to meet the requirement of fast high-precision map updates and limits their wide application. Professional visual crowdsourcing data is also difficult to bring to high precision on its own, so data for the same area needs to be acquired many times and the data acquired at different times must be aggregated to obtain high-precision road information. It is therefore necessary to aggregate the lane lines in professional visual crowdsourcing data. However, the aggregation of lane lines based on professional visual crowdsourcing data is still at an early stage: there is no mature aggregation method, and the high-precision road information required to produce a high-precision map cannot yet be obtained.
Disclosure of Invention
The embodiment of the invention aims to provide a lane line aggregation method to improve the accuracy of lane lines obtained by aggregation.
In order to achieve the above object, an embodiment of the present invention provides a lane line aggregation method, including:
acquiring each frame of visual crowdsourcing data acquired in each preset time length of a plurality of continuous preset time lengths; wherein each frame of visual crowdsourcing data comprises: coordinate information of one or more lane lines;
for each lane line in each frame of visual crowdsourcing data, calculating coordinates of a plurality of specified lane line points of the lane line in a vehicle body coordinate system as first coordinates;
converting first coordinates of each lane line point of each lane line in each frame of visual crowdsourcing data into second coordinates in a reference coordinate system based on a conversion relation between a vehicle body coordinate system and the reference coordinate system;
aggregating the lane lines in the multi-frame visual crowdsourcing data in each preset time length to obtain one or more first aggregated lane lines;
aggregating the first aggregated lane lines within two adjacent preset time periods to obtain one or more second aggregated lane lines;
and for each second aggregation lane line, fitting each lane line point of the second aggregation lane line according to the second coordinate of each lane line point of the second aggregation lane line to obtain a fitted lane line.
In order to achieve the above object, an embodiment of the present invention provides an electronic device, which includes a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
and the processor is used for realizing any one of the steps of the lane line aggregation method when executing the program stored in the memory.
In order to achieve the above object, an embodiment of the present invention provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements any of the above steps of the lane line aggregation method.
In order to achieve the above object, an embodiment of the present invention further provides a computer program product containing instructions which, when run on a computer, cause the computer to perform any of the above steps of the lane line aggregation method.
The embodiment of the invention has the following beneficial effects:
By adopting the method provided by the embodiment of the invention, each frame of visual crowdsourcing data collected in each of a plurality of consecutive preset time periods is obtained; for each lane line in each frame of visual crowdsourcing data, the coordinates of a plurality of specified lane line points of the lane line in a vehicle body coordinate system are calculated as first coordinates; the first coordinates of the lane line points of each lane line in each frame of visual crowdsourcing data are converted into second coordinates in a reference coordinate system based on the conversion relationship between the vehicle body coordinate system and the reference coordinate system; the lane lines in the multiple frames of visual crowdsourcing data within each preset time period are aggregated to obtain one or more first aggregated lane lines; the first aggregated lane lines within two adjacent preset time periods are aggregated to obtain one or more second aggregated lane lines; and, for each second aggregated lane line, the lane line points of the second aggregated lane line are fitted according to their second coordinates to obtain a fitted lane line. The lane lines in the multiple frames of visual crowdsourcing data within each preset time period are aggregated to obtain first aggregated lane lines, and the first aggregated lane lines within two adjacent preset time periods are aggregated to obtain second aggregated lane lines; through this multi-stage aggregation, the accuracy of the determined fitted lane line is higher.
Of course, not all of the advantages described above need to be achieved at the same time in the practice of any one product or method of the invention.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flowchart of a lane line aggregation method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of lane lines included in a frame of visual crowdsourcing data according to an embodiment of the invention;
fig. 3 is a flowchart of aggregating lane lines in multi-frame visual crowdsourcing data in each preset time period according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a plurality of frames of visual crowdsourcing data included within a preset time period β;
fig. 5 is a flowchart of lane line fitting for lane line points of a lane line having the same identifier according to an embodiment of the present invention;
fig. 6 is a flowchart illustrating clustering of a first aggregation lane line in current aggregation start data and a first aggregation lane line in current end data according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating a plurality of pseudo coordinate points according to an embodiment of the present invention;
fig. 8 is a schematic diagram illustrating that first aggregated lane lines corresponding to pseudo coordinate points having the same clustering label are clustered into one category according to an embodiment of the present invention;
FIG. 9a is a flowchart of a method for fitting the lane line points of each second aggregate lane line to obtain a fitted lane line according to an embodiment of the present invention;
FIG. 9b is a schematic diagram of a fitted lane line obtained by fitting the lane line points of each second fitted lane line according to an embodiment of the present invention;
fig. 10a is a flowchart illustrating smoothing the fitted lane line to obtain a target lane line according to an embodiment of the present invention;
fig. 10b is a schematic diagram of smoothing the fitted lane line to obtain a target lane line according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention discloses a lane line aggregation method, which, as shown in figure 1, comprises the following steps:
step 101, acquiring each frame of visual crowdsourcing data acquired in each preset time length of a plurality of continuous preset time lengths; wherein each frame of visual crowdsourcing data comprises: coordinate information of one or more lane lines.
Step 102, for each lane line in each frame of visual crowdsourcing data, calculating coordinates of a plurality of specified lane line points of the lane line in a vehicle body coordinate system as first coordinates.
Wherein the vehicle body coordinate system is: a coordinate system with the center of gravity of the camera sensor mounted in the vehicle as the origin O, the direction parallel to the traveling direction of the vehicle as the Y-axis, the direction perpendicular to the traveling direction of the vehicle as the X-axis, and the direction perpendicular to the XOY plane and pointing upward as the Z-axis.
And 103, converting the first coordinates of the lane line points of each lane line in each frame of visual crowdsourcing data into second coordinates in the reference coordinate system based on the conversion relation between the vehicle body coordinate system and the reference coordinate system.
And 104, aggregating the lane lines in the multi-frame visual crowdsourcing data in each preset time length to obtain one or more first aggregated lane lines.
And 105, aggregating the first aggregated lane lines within two adjacent preset time periods to obtain one or more second aggregated lane lines.
And 106, fitting each lane line point of the second aggregated lane line according to the second coordinate of each lane line point of the second aggregated lane line to obtain a fitted lane line.
By adopting the method provided by the embodiment of the invention, each frame of visual crowdsourcing data collected in each of a plurality of consecutive preset time periods is obtained; for each lane line in each frame of visual crowdsourcing data, the coordinates of a plurality of specified lane line points of the lane line in a vehicle body coordinate system are calculated as first coordinates; the first coordinates of the lane line points of each lane line in each frame of visual crowdsourcing data are converted into second coordinates in a reference coordinate system based on the conversion relationship between the vehicle body coordinate system and the reference coordinate system; the lane lines in the multiple frames of visual crowdsourcing data within each preset time period are aggregated to obtain one or more first aggregated lane lines; the first aggregated lane lines within two adjacent preset time periods are aggregated to obtain one or more second aggregated lane lines; and, for each second aggregated lane line, the lane line points of the second aggregated lane line are fitted according to their second coordinates to obtain a fitted lane line. The lane lines in the multiple frames of visual crowdsourcing data within each preset time period are aggregated to obtain first aggregated lane lines, and the first aggregated lane lines within two adjacent preset time periods are aggregated to obtain second aggregated lane lines; through this multi-stage aggregation, the accuracy of the determined fitted lane line is higher.
The method and apparatus of the present invention will be described in detail with reference to the accompanying drawings using specific embodiments.
In an embodiment of the present invention, a flow of the lane line aggregation method provided in the embodiment of the present invention may include step a 1-step a 11:
step a1, obtaining each frame of visual crowdsourcing data collected in each of a plurality of consecutive preset time periods.
Wherein, the visual crowdsourcing data is: data having feature attribute information and coordinate information, generated by the sensor identifying features in the scanning area in real time. The minimum output unit of the visual crowdsourcing data is a frame, and each frame of visual crowdsourcing data comprises: the serial number of the frame, coordinate information of one or more lane lines, the curve fitting coefficients of the lane lines identified at that time, the coordinate information and attitude information of the vehicle at that time, and the like.
The preset time period can be specifically set according to the application condition, and one preset time period can be set to be one minute. Each preset time length can comprise a plurality of frames of visual crowdsourcing data.
In this step, multiple frames of visual crowdsourcing data can be collected over a plurality of consecutive preset time periods. For example, if a preset time period is one minute, multiple frames of visual crowdsourcing data can be collected continuously for several minutes.
For each frame of crowdsourcing data, the Y coordinates of the start point and the end point of each lane line in the vehicle body coordinate system are detected, i.e., the distances between the detected start and end points of the lane line and the vehicle in the Y-axis direction, and the corresponding X coordinates of the start point and the end point are calculated from the curve function coefficients of the lane line.
Step a2, for each lane line in each frame of visual crowdsourcing data, calculating coordinates of a plurality of specified lane line points of the lane line in a vehicle body coordinate system as first coordinates.
In this step, for each frame of crowdsourcing data, the distance from the start point and the end point of each lane line of the frame of crowdsourcing data to the vehicle in the Y-axis direction may be obtained first, and the distance is used as the ordinate of the start point and the end point of each lane line in the vehicle body coordinate system.
Then, the coordinates of the plurality of specified lane line points of the lane line in the vehicle body coordinate system may be calculated as the first coordinates using the following formula:
Xcar = C0 + C1*Ycar + C2*Ycar^2 + C3*Ycar^3

where Xcar and Ycar are respectively the abscissa and ordinate of the lane line point in the vehicle body coordinate system; Ycar is provided by each frame of visual crowdsourcing data, and C0, C1, C2 and C3 are the curve fitting coefficients of the lane line at that time provided by each frame of visual crowdsourcing data: C0 indicates the offset distance of the vehicle from the lane line, C1 the tangent of the yaw angle, C2 the curvature of the lane line, and C3 the rate of change of curvature of the lane line.
Wherein, the designated lane line point of each lane line in each frame of visual crowdsourcing data may include: the starting point of the lane line and the end point of the lane line, and a plurality of lane line points with the vertical coordinate spacing of 1m between the starting point of the lane line and the end point of the lane line.
For example, fig. 2 is a schematic diagram of lane lines collected in a frame of visual crowdsourcing data. Referring to fig. 2, the frame of visual crowdsourcing data acquires two lane lines: lane line L1 and lane line L2. The ordinates of the start point a and the end point d of the lane line L1 in the vehicle body coordinate system, Ya and Yd, and the ordinates of the start point e and the end point h of the lane line L2 in the vehicle body coordinate system, Ye and Yh, can be obtained from the visual crowdsourcing data.
Then, for the lane line L1, according to the formula Xcar = C0 + C1*Ycar + C2*Ycar^2 + C3*Ycar^3 and the ordinates Ya and Yd of the start point a and the end point d of the lane line L1 in the vehicle body coordinate system, the abscissas Xa and Xd of the start point a and the end point d of the lane line L1 in the vehicle body coordinate system can be calculated:
Xa = CL1_0 + CL1_1*Ya + CL1_2*Ya^2 + CL1_3*Ya^3
Xd = CL1_0 + CL1_1*Yd + CL1_2*Yd^2 + CL1_3*Yd^3
Then, one lane line point may be interpolated every 1 m of ordinate in the vehicle body coordinate system between the start point a and the end point d of the lane line L1, giving lane line points b and c; the abscissas of the interpolated lane line points b and c in the vehicle body coordinate system are calculated, and the lane line points a, b, c and d may be taken as the specified lane line points of the lane line L1.
For the lane line L2, according to the formula Xcar = C0 + C1*Ycar + C2*Ycar^2 + C3*Ycar^3 and the ordinates Ye and Yh of the start point e and the end point h of the lane line L2 in the vehicle body coordinate system, the abscissas Xe and Xh of the start point e and the end point h of the lane line L2 in the vehicle body coordinate system can be calculated:
Xe = CL2_0 + CL2_1*Ye + CL2_2*Ye^2 + CL2_3*Ye^3
Xh = CL2_0 + CL2_1*Yh + CL2_2*Yh^2 + CL2_3*Yh^3
Then, one lane line point may be interpolated every 1 m of ordinate in the vehicle body coordinate system between the start point e and the end point h of the lane line L2, giving lane line points f and g; the abscissas of the interpolated lane line points f and g in the vehicle body coordinate system are calculated, and the lane line points e, f, g and h may be taken as the specified lane line points of the lane line L2.
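The following is a minimal Python sketch (not taken from the patent text) of computing the specified lane line points of one lane line in the vehicle body coordinate system, assuming the cubic coefficients C0..C3 and the start/end ordinates are read from one frame of visual crowdsourcing data; the function name and arguments are illustrative only.

def specified_lane_points(c0, c1, c2, c3, y_start, y_end, step=1.0):
    """Return (X, Y) points from y_start to y_end, spaced `step` metres apart in Y."""
    points = []
    y = y_start
    while y < y_end:
        x = c0 + c1 * y + c2 * y ** 2 + c3 * y ** 3
        points.append((x, y))
        y += step
    # always include the end point of the lane line
    x_end = c0 + c1 * y_end + c2 * y_end ** 2 + c3 * y_end ** 3
    points.append((x_end, y_end))
    return points

# e.g. the points a, b, c, d of lane line L1 in fig. 2 would be
# specified_lane_points(CL1_0, CL1_1, CL1_2, CL1_3, Ya, Yd)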
Step A3, based on the conversion relationship between the vehicle body coordinate system and the reference coordinate system, converting the first coordinates of the lane line points of each lane line in each frame of visual crowdsourcing data into the second coordinates in the reference coordinate system.
The reference coordinate system may be WGS-84 (World Geodetic System 1984, a geocentric coordinate system). WGS-84 is a right-handed coordinate system with the earth's centroid as the coordinate origin, the direction from the earth's centroid to the Conventional Terrestrial Pole (CTP) defined by BIH 1984.0 as the Z-axis, the X-axis pointing to the intersection of the BIH 1984.0 prime meridian plane and the CTP equator, and the Y-axis perpendicular to the Z-axis and the X-axis. The XOY plane is the horizontal plane established by the earth's centroid, the X-axis, and the Y-axis.
The conversion relationship between the vehicle body coordinate system and the reference coordinate system in the embodiment of the invention can be expressed as:

[X_WGS84, Y_WGS84, Z_WGS84]^T = R * [Xcar, Ycar, Zcar]^T + [X'_WGS84, Y'_WGS84, Z'_WGS84]^T

In the embodiment of the invention, the above formula can be used to convert the first coordinates of each lane line point into second coordinates in the reference coordinate system. Here X_WGS84, Y_WGS84 and Z_WGS84 are respectively the abscissa, ordinate and vertical coordinate of the lane line point in the reference coordinate system; Xcar, Ycar and Zcar are respectively the abscissa, ordinate and vertical coordinate of the lane line point in the vehicle body coordinate system, where Ycar and Zcar are provided by the visual crowdsourcing data and Xcar is calculated from Ycar. R is a preset conversion matrix for converting the vehicle body coordinate system into the reference coordinate system, a rotation matrix constructed from the attitude angles α, β and γ of the vehicle recorded in each frame of visual crowdsourcing data; X'_WGS84, Y'_WGS84 and Z'_WGS84 are the abscissa, ordinate and vertical coordinate of the vehicle track point recorded in each frame of visual crowdsourcing data.
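A minimal sketch of step A3 in Python follows. The composition of R from the attitude angles α, β, γ (here a Z-Y-X Euler rotation) is an assumption made for illustration; the patent only states that R is a preset conversion matrix.

import numpy as np

def rotation_matrix(alpha, beta, gamma):
    # rotation about X (alpha), Y (beta), Z (gamma); the rotation order is an assumption
    rx = np.array([[1, 0, 0],
                   [0, np.cos(alpha), -np.sin(alpha)],
                   [0, np.sin(alpha),  np.cos(alpha)]])
    ry = np.array([[ np.cos(beta), 0, np.sin(beta)],
                   [0, 1, 0],
                   [-np.sin(beta), 0, np.cos(beta)]])
    rz = np.array([[np.cos(gamma), -np.sin(gamma), 0],
                   [np.sin(gamma),  np.cos(gamma), 0],
                   [0, 0, 1]])
    return rz @ ry @ rx

def to_reference_frame(p_car, r, track_point):
    """p_car: (Xcar, Ycar, Zcar); track_point: vehicle track point in the reference frame."""
    return r @ np.asarray(p_car, dtype=float) + np.asarray(track_point, dtype=float)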
Step A4, aggregating the lane lines in the multi-frame visual crowdsourcing data in each preset time length to obtain one or more first aggregated lane lines.
Step a5, determining the curvature and the curvature rate of change of each first aggregate lane line.
In the embodiment of the invention, the curvature and the curvature change rate of the first aggregated lane line can be obtained from the curve fitting coefficient of the first aggregated lane line.
The curve fit function for the first aggregate lane line may be:
Xju1 = Cju1_0 + Cju1_1*Yju1 + Cju1_2*Yju1^2 + Cju1_3*Yju1^3

where Xju1 and Yju1 are respectively the abscissa and ordinate of a lane line point of the first aggregated lane line; Cju1_0 represents the offset distance of the vehicle from the first aggregated lane line, Cju1_1 the tangent of the yaw angle, Cju1_2 the curvature of the first aggregated lane line, and Cju1_3 the rate of change of curvature of the first aggregated lane line.
Step A6, selecting a first aggregation lane line in a first preset time length of a plurality of continuous preset time lengths as current aggregation starting data, and selecting a first aggregation lane line in a next adjacent preset time length of the preset time lengths as current aggregation ending data.
Step A7, clustering the first aggregation lane line in the current aggregation start data and the first aggregation lane line in the current end data based on the curvature and the curvature change rate of the first aggregation lane line.
And step A8, taking the current aggregation ending data as new current aggregation starting data, taking the first aggregation lane line in the next adjacent preset time length of the new current aggregation starting data as new current aggregation ending data, and returning to execute the step A7.
Step A9, after all the first aggregation lane lines participate in clustering, determining the first aggregation lane lines clustered into one type as second aggregation lane lines to obtain one or more second aggregation lane lines.
And step A10, fitting each lane line point of the second aggregated lane line according to the second coordinate of each lane line point of the second aggregated lane line to obtain a fitted lane line.
And step A11, smoothing the fitted lane line to obtain a target lane line.
By adopting the method provided by the embodiment of the invention, the first aggregated lane line is obtained by aggregating the lane lines in the multi-frame visual crowdsourcing data in each preset time length, and the second aggregated lane line is obtained by aggregating the first aggregated lane lines in two adjacent preset time lengths. The characteristics that the curvature and the curvature change rate of the two adjacent frames of lane lines are small in change are utilized to realize the rapid tracking of the two adjacent frames of lane lines and the rapid aggregation of per-minute lane line crowdsourcing data. And for each second aggregation lane line, fitting each lane line point of the second aggregation lane line according to the second coordinate of each lane line point of the second aggregation lane line to obtain a fitted lane line, and smoothing the fitted lane line to obtain a target lane line, so that the obtained target lane line is more consistent with the actual lane line, and the precision is higher.
In the embodiment of the present invention, fig. 3 is a flowchart for aggregating lane lines in multiple frames of visual crowdsourcing data within each preset time duration. In the step a4, aggregating the lane lines in the multi-frame visual crowdsourcing data in each preset time duration to obtain one or more first aggregated lane lines may include:
step 301, for multi-frame visual crowdsourcing data in each preset time length, sequencing the visual crowdsourcing data of each frame in the preset time length according to the sequence of the sequence number; and sequencing the multiple lane lines in the same frame of visual crowdsourcing data according to the size sequence of the abscissa in the first coordinate of the starting point of each lane line.
In the embodiment of the present invention, each preset duration may include multiple frames of visual crowdsourcing data, each frame of visual crowdsourcing data in each preset duration corresponds to one serial number, and a lane line in each frame of visual crowdsourcing data records the serial number corresponding to the frame of visual crowdsourcing data. For example, if the preset time length is one minute and each minute includes multiple frames of visual crowdsourcing data, a sequence number may be assigned to each frame of visual crowdsourcing data according to the order in which the visual crowdsourcing data is collected in the one minute, for example, the sequence number of the first frame of visual crowdsourcing data collected in the one minute may be 0, the sequence number of the second frame of visual crowdsourcing data collected in the one minute may be 1, and sequence numbers are assigned to the frames of visual crowdsourcing data collected in the one minute in sequence.
In the embodiment of the present invention, for the multiple frames of visual crowdsourcing data in each preset time duration, the frames of visual crowdsourcing data in the preset time duration may be sorted according to the sequence of the sequence numbers from small to large. If a frame of visual crowdsourcing data includes multiple lane lines, the multiple lane lines in the same frame of visual crowdsourcing data may be sorted according to the order of the abscissa in the first coordinate of the starting point of each lane line.
For example, the 1 st frame crowd-sourced data is arranged in front of the 2 nd frame crowd-sourced data, and the lane line in the 1 st frame crowd-sourced data is arranged in front of the lane line in the 2 nd frame crowd-sourced data. And if the 1 st frame crowdsourcing data contains a plurality of lane lines, the plurality of lane lines of the 1 st frame crowdsourcing data can be sorted from small to large according to the abscissa of the starting point of each lane line.
For example, fig. 4 is a schematic diagram of multiple frames of visual crowdsourcing data included within a preset time period β. Referring to fig. 4, the 0 th frame, the 1 st frame, and the 2 nd frame of visual crowdsourcing data may be included within the preset time period β. The 0 th frame of visual crowdsourcing data comprises a lane line La0 and a lane line Lb0, and the serial numbers of the lane line La0 and the lane line Lb0 are both 0; the 1 st frame of visual crowdsourcing data comprises a lane line La1 and a lane line Lb1, and the serial numbers of the lane line La1 and the lane line Lb1 are both 1; the 2 nd frame of visual crowdsourcing data includes lane line La2 and lane line Lb2, and the serial numbers of the lane line La2 and the lane line Lb2 are both 2. And the X-axis direction of each frame of visual crowdsourcing data in the preset time length beta under the vehicle body coordinate system is shown in the figure. In the embodiment of the invention, the lane lines in the 3-frame visual crowdsourcing data within the preset duration beta can be sequenced according to the sequence of the sequence numbers from small to large; and sorting the multiple lane lines in the same frame of visual crowdsourcing data according to the magnitude sequence of the abscissa in the first coordinate of the starting point of each lane line, wherein the sorted result is shown in fig. 4.
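A minimal sketch of the sorting in step 301, assuming each frame carries its serial number and each lane line carries the first coordinate of its start point; the dictionary field names used here are hypothetical.

# Hypothetical data layout: frame = {"seq": int, "lanes": [lane, ...]},
# lane = {"start_x": float, ...}. Sort frames by serial number, then sort the
# lane lines inside each frame by the abscissa of their start point.
def sort_crowdsourcing_data(frames):
    frames = sorted(frames, key=lambda f: f["seq"])
    for frame in frames:
        frame["lanes"].sort(key=lambda lane: lane["start_x"])
    return frames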
Step 302, selecting a frame of visual crowdsourcing data with the smallest serial number as a starting frame, selecting one lane line in the starting frame as a current starting lane line, and using the visual crowdsourcing data of the next frame of the starting frame as a search frame.
Step 303, selecting a lane line in the search frame, determining whether the current start lane line and the lane line are successfully matched according to the curve fitting coefficient of the current start lane line and the curve fitting coefficient of the lane line, if so, executing step 304, and if not, executing step 305.
In the embodiment of the invention, each frame of crowdsourcing data records a curve fitting coefficient of a lane line, and the curve fitting coefficient of the lane line comprises the following components: the curvature and the rate of change of curvature of the lane line. Whether the current starting lane line is successfully matched with the lane line can be judged according to the curvature and the curvature change rate of the lane line recorded by each frame of crowdsourcing data. Specifically, if the absolute value of the difference between the curvature of the current starting lane line and the curvature of the lane line is smaller than a preset curvature threshold value, and the absolute value of the difference between the curvature change rate of the current starting lane line and the curvature change rate of the lane line is smaller than a preset curvature change rate threshold value, it indicates that the current starting lane line and the lane line are successfully matched; otherwise, the current starting lane line is not successfully matched with the lane line.
For example, suppose the current starting lane line is LA and its curve function is: XA = CA0 + CA1*YA + CA2*YA^2 + CA3*YA^3, where CA0 indicates the offset distance between the vehicle and the current starting lane line LA, CA1 the tangent of the vehicle yaw angle, CA2 the curvature of LA, and CA3 the rate of change of curvature of LA. Suppose the lane line is LB and its curve function is: XB = CB0 + CB1*YB + CB2*YB^2 + CB3*YB^3, where CB0 indicates the offset distance between the vehicle and the lane line LB, CB1 the tangent of the vehicle yaw angle, CB2 the curvature of LB, and CB3 the rate of change of curvature of LB.
The absolute value of the difference between the curvature CA2 of the current starting lane line LA and the curvature CB2 of the lane line LB can be calculated: |CA2 - CB2|; and the absolute value of the difference between the curvature change rate CA3 of LA and the curvature change rate CB3 of LB: |CA3 - CB3|.
If |CA2 - CB2| is less than the preset curvature threshold and |CA3 - CB3| is less than the preset curvature change rate threshold, the current starting lane line LA and the lane line LB are successfully matched, and LA and LB can be marked as the same lane line.
Wherein, the successful matching of the current starting lane line and the lane line represents that: the current starting lane line and the lane line are the same lane line or two lane lines are on the same lane.
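A small sketch of the matching test in step 303: two lane lines match when both the curvature difference and the curvature-change-rate difference stay below preset thresholds. The threshold values below are placeholders, not values given in the patent.

CURVATURE_THRESHOLD = 1e-3        # placeholder value
CURVATURE_RATE_THRESHOLD = 1e-4   # placeholder value

def lanes_match(c_a, c_b,
                curv_thr=CURVATURE_THRESHOLD,
                rate_thr=CURVATURE_RATE_THRESHOLD):
    """c_a, c_b: (C0, C1, C2, C3) cubic coefficients of the two lane lines."""
    return (abs(c_a[2] - c_b[2]) < curv_thr and
            abs(c_a[3] - c_b[3]) < rate_thr)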
Step 304, if the matching is successful, the current starting lane line and the lane line are marked as the same lane line, and the lane line is taken as the matched lane line.
In step 305, if the matching is not successful, it is determined whether there is any lane line not matched with the current lane line in the search frame, if so, step 306 is executed, and if not, step 307 is executed.
In step 306, one lane line in the search frame that is not matched with the current lane line is selected and returned to step 303.
Step 307, using the current starting lane line as the matched starting lane line, determining whether there are other lane lines in the starting frame except the matched starting lane line, if so, executing step 308, and if not, executing step 309.
Step 308, selecting one lane line from the other lane lines as a new current starting lane line, selecting one lane line in the search frame except the matched lane line, and returning to execute step 303.
Step 309, determining whether all the lane lines in all the visual crowdsourcing data within the preset time length are matched, if not, executing step 310, and if so, executing step 311.
Step 310, using the search frame as a new start frame, selecting one lane line in the start frame as a current start lane line, using the visual crowdsourcing data of the next frame of the start frame as a new search frame, and returning to step 303.
And 311, performing lane line fitting by using the lane line points of the lane lines with the same identification in the preset time length to obtain one or more first aggregation lane lines.
Specifically, fig. 5 is a flowchart of performing lane line fitting on each lane line point of the lane lines having the same identifier. Referring to fig. 5, in this step, performing lane line fitting using each lane line point of the lane lines with the same identifier within the preset time duration to obtain one or more first aggregated lane lines, which may include:
step 501, selecting a first preset number of lane line points from the lane line points of the lane lines with the same identification in the preset time.
The first preset number may be set to 6 or 8, and the like, and is not particularly limited.
Step 502, calculating a fitted curve model of the lane line according to the second coordinates of the first preset number of lane line points.
In this step, a RANdom SAmple Consensus (RANSAC) algorithm may be adopted to calculate a fitted curve model of the lane lines according to the second coordinates of the first preset number of lane line points. Specifically, the expression of the fitting curve model obtained by calculation is as follows:
X' = C'0 + C'1*Y' + C'2*Y'^2 + C'3*Y'^3

where X' and Y' are respectively the abscissa and ordinate of the lane line points obtained from the fitted curve model; C'0, C'1, C'2 and C'3 are the curve fitting coefficients of the lane line obtained from the fitted curve model: C'0 represents the offset distance of the vehicle from the lane line, C'1 the tangent of the yaw angle, C'2 the curvature of the lane line, and C'3 the rate of change of curvature of the lane line.
Step 503, calculating an error between the second coordinates of the lane line point and the coordinates of the lane line point obtained based on the current fitted curve model for each remaining lane line point with the same identifier.
In step 504, the number of lane line points with the error value smaller than the preset error threshold is counted as the number of available points.
The preset error threshold may be set to 0.03 or 0.05, and the like, and is not particularly limited.
For example, if the preset time period is set to 1 minute and the first preset number is set to 6, and there are 15 lane line points belonging to lane lines with the same identifier within that minute, 6 of these lane line points can be selected for calculating the fitted curve model. In step 503, for the remaining 9 lane line points with the same identifier, the abscissa in the second coordinate of each lane line point may be substituted into the current fitted curve model to obtain the corresponding fitted ordinate, and the error between the ordinate in the second coordinate of each lane line point and the fitted ordinate may be calculated; or, the ordinate in the second coordinate of each lane line point may be substituted into the current fitted curve model to obtain the corresponding fitted abscissa, and the error between the abscissa in the second coordinate of each remaining lane line point and the fitted abscissa may be calculated. In this step, the number of remaining lane line points whose error value is smaller than the preset error threshold may be used as the number of available points.
Step 505, determining whether the ratio of the number of the available points to the number of all the lane line points with the same identifier is greater than a preset ratio threshold, if not, executing step 506, and if so, executing step 507.
The preset ratio threshold may be set to 0.80 or 0.85, and the like, and is not particularly limited.
Step 506, reselecting the first preset number of lane line points with the same identification as the new first preset number of lane line points, and returning to execute step 502.
And 507, selecting all the lane line points with the error values smaller than the preset error threshold, and optimizing the current fitting curve model by using the second coordinates of the lane line points with the selected error values smaller than the preset error threshold to obtain the optimized fitting curve model.
Step 508, determining whether the optimization times of the fitting curve model reaches a preset time, if so, executing step 509, and if not, executing step 510.
In step 509, the lane line determined by the optimized curve fitting model is determined as a first aggregated lane line.
And step 510, taking the optimized fitted curve model as a new current fitted curve model, and returning to execute the step 503.
The preset number of times may be set to 5 times or 6 times, and the like, and is not particularly limited. The curve fitting coefficient of the first aggregation lane line comprises the curvature and the curvature change rate of the first aggregation lane line. For example, if the determined fitted curve model of the first aggregated lane line is:
X' = C'0 + C'1*Y' + C'2*Y'^2 + C'3*Y'^3

where C'2 is the curvature of the first aggregated lane line and C'3 is the rate of change of curvature of the first aggregated lane line.
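A sketch of the RANSAC-style fitting of steps 501-510, assuming the lane line points are given as arrays of second-coordinate x and y values; numpy.polyfit is used in place of an explicit least-squares derivation, and whether the sampled points count toward the available-point ratio is an assumption, since the patent text is not explicit.

import numpy as np

def fit_first_aggregated_lane(xs, ys, n_sample=6, err_thr=0.05,
                              ratio_thr=0.8, n_refine=5, max_iter=100, rng=None):
    rng = rng or np.random.default_rng()
    xs, ys = np.asarray(xs, dtype=float), np.asarray(ys, dtype=float)
    n = len(xs)
    for _ in range(max_iter):
        idx = rng.choice(n, size=n_sample, replace=False)
        coeffs = np.polyfit(ys[idx], xs[idx], deg=3)          # current fitted curve model
        rest = np.setdiff1d(np.arange(n), idx)
        err = np.abs(np.polyval(coeffs, ys[rest]) - xs[rest])
        inliers = rest[err < err_thr]
        # available-point ratio over all points; sampled points counted as available (assumption)
        if (len(inliers) + n_sample) / n > ratio_thr:
            break
    for _ in range(n_refine):                                 # optimize the model a preset number of times
        err = np.abs(np.polyval(coeffs, ys) - xs)
        inliers_all = np.where(err < err_thr)[0]
        coeffs = np.polyfit(ys[inliers_all], xs[inliers_all], deg=3)
    return coeffs   # numpy order: highest degree first (C'3, C'2, C'1, C'0)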
In the embodiment of the present invention, DBScan (Density-Based Spatial Clustering of Applications with Noise) may be adopted to cluster the first aggregation lane line in the current aggregation start data and the first aggregation lane line in the current end data. Fig. 6 is a flowchart of clustering a first aggregation lane line in the current aggregation start data and a first aggregation lane line in the current end data. Referring to fig. 6, the clustering the first aggregation lane line in the current aggregation start data and the first aggregation lane line in the current end data based on the curvature and the curvature change rate of the first aggregation lane line in step a7 may include:
Step 601, for each first aggregated lane line, establishing a pseudo coordinate point with the curvature of the first aggregated lane line as the abscissa and the curvature change rate of the first aggregated lane line as the ordinate.
Each pseudo-coordinate point corresponds to a first aggregate lane line.
Step 602, for each pseudo coordinate point, determining all pseudo coordinate points whose euclidean distance to the pseudo coordinate point is not greater than a preset search neighborhood range as neighboring points of the pseudo coordinate point.
The preset search neighborhood range may be set to 2 or 3, and the like, and is not particularly limited.
Step 603, determining whether the number of the neighboring points of the pseudo coordinate point is smaller than a preset minimum number of neighboring points, if so, executing step 604, and if not, executing step 605.
The number of the preset minimum neighboring points may be set to 3 or 4, and the like, and is not particularly limited.
Step 604, mark the pseudo coordinate point as a noise point.
Step 605, mark the pseudo-coordinate point as a core point, and assign a cluster label to the pseudo-coordinate point.
In step 606, it is determined whether each neighboring point of the pseudo coordinate point is a core point, if not, step 607 is executed, and if yes, step 608 is executed.
In this step, it may be determined that the neighboring point is not the core point by determining all the pseudo coordinate points whose euclidean distances from the neighboring point are not greater than the preset search neighborhood range, and when the number of all the pseudo coordinate points whose euclidean distances from the neighboring point are not greater than the preset search neighborhood range is less than the preset minimum number of the neighboring points.
In step 607, the same cluster label as the pseudo coordinate point is assigned to the neighboring point, and step 608 is continued.
Step 608, the first aggregated lane lines corresponding to the pseudo coordinate points with the same clustering label are clustered into one type.
For example, suppose there are 6 first aggregated lane lines: LJH1, LJH2, LJH3, LJH4, LJH5 and LJH6; the curvature and curvature change rate of LJH1 are C'JH12 and C'JH13, those of LJH2 are C'JH22 and C'JH23, those of LJH3 are C'JH32 and C'JH33, those of LJH4 are C'JH42 and C'JH43, those of LJH5 are C'JH52 and C'JH53, and those of LJH6 are C'JH62 and C'JH63.
Then, for each first aggregated lane line, a pseudo coordinate point may be established with the curvature of the lane line as the abscissa and its curvature change rate as the ordinate, giving 6 pseudo coordinate points: (C'JH12, C'JH13), (C'JH22, C'JH23), (C'JH32, C'JH33), (C'JH42, C'JH43), (C'JH52, C'JH53) and (C'JH62, C'JH63).
Referring to fig. 7, suppose the preset search neighborhood range is set to 3 and the preset minimum number of neighboring points is set to 3. The preset search neighborhood range of the pseudo coordinate point (C'JH12, C'JH13) is the range α, which further contains the pseudo coordinate points (C'JH22, C'JH23), (C'JH32, C'JH33) and (C'JH42, C'JH43). It can be seen that, for the pseudo coordinate point (C'JH12, C'JH13), the number of pseudo coordinate points contained in the range α is not less than the preset minimum number of neighboring points, 3. The pseudo coordinate point (C'JH12, C'JH13) may then be marked as a core point and assigned a clustering label k1; and for each neighboring point of (C'JH12, C'JH13), if the neighboring point is not a core point, the same clustering label k1 as the pseudo coordinate point is assigned to the neighboring point. The first aggregated lane lines corresponding to the pseudo coordinate points having the same clustering label k1 may then be clustered into one class.
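A sketch of the clustering in steps 601-608 using scikit-learn's DBSCAN on the pseudo coordinate points (curvature, curvature change rate); eps stands in for the preset search neighborhood range and min_samples for the preset minimum number of neighboring points, and the lane attribute names are hypothetical.

import numpy as np
from sklearn.cluster import DBSCAN

def cluster_first_aggregated_lanes(lanes, eps=3.0, min_samples=3):
    """lanes: objects with hypothetical .curvature and .curvature_rate attributes."""
    pseudo_points = np.array([[l.curvature, l.curvature_rate] for l in lanes])
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pseudo_points)
    clusters = {}
    for lane, label in zip(lanes, labels):
        if label == -1:          # noise point
            continue
        clusters.setdefault(label, []).append(lane)
    return clusters              # lane lines sharing a label form one class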
In an embodiment of the present invention, another process of clustering the first aggregated lane line in the current aggregation start data and the first aggregated lane line in the current end data based on the curvature and the curvature change rate of the first aggregated lane line in the step a7 may include steps B1-B5:
step B1, calculating the euclidean distance between the end point of each first aggregation lane line in the current aggregation start data and the start point of each first aggregation lane line in the current aggregation end data.
And step B2, calculating the vertical distance between the tangent line at the end point of each first aggregation lane line in the current aggregation starting data and the tangent line at the start point of each first aggregation lane line in the current aggregation ending data.
In this step, the following formula may be adopted to calculate the vertical distance between the tangent line at the end point of each first aggregation lane line in the current aggregation start data and the tangent line at the start point of each first aggregation lane line in the current aggregation end data:
Figure GDA0002740675680000151
where k is the slope of the tangent line and b is its intercept; the tangent line equation satisfies Xcar = k*Ycar + b, and k and b are obtained from the fitted curve function of the first aggregated lane line.
The execution sequence of step B1 and step B2 is not particularly limited.
Step B3, for each first aggregation lane line, determining the cluster label corresponding to the first aggregation lane line according to the curvature and the curvature change rate of the first aggregation lane line.
In this step, the method described in steps 601 to 607 may be adopted to determine the cluster label corresponding to the first aggregation lane line.
Step B4, selecting a first aggregated lane line from the current aggregation start data; voting for the first aggregated lane line in the current aggregation end data that has the same clustering label as the selected first aggregated lane line; voting for the first aggregated lane line in the current aggregation end data whose start point has the smallest Euclidean distance to the end point of the selected first aggregated lane line; and voting for the first aggregated lane line in the current aggregation end data whose tangent line at its start point has the smallest vertical distance to the tangent line at the end point of the selected first aggregated lane line.
Specifically, in this step, a tensor voting mode may be adopted to cluster the first aggregation lane line:
one first aggregated lane line in the current aggregation start data is selected; 1 vote is cast for the first aggregated lane line in the current aggregation end data that has the same clustering label as the selected first aggregated lane line; 1 vote is cast for the first aggregated lane line in the current aggregation end data whose start point has the smallest Euclidean distance to the end point of the selected first aggregated lane line; and 1 vote is cast for the first aggregated lane line in the current aggregation end data whose tangent line at its start point has the smallest vertical distance to the tangent line at the end point of the selected first aggregated lane line.
Referring to fig. 8, Dist1 is the Euclidean distance between the end point of the selected first aggregated lane line and the start point of a first aggregated lane line in the current aggregation end data, and Dist2 is the vertical distance between the tangent line at the end point of the selected first aggregated lane line and the tangent line at the start point of the first aggregated lane line in the current aggregation end data.
And step B5, counting the number of votes for each first aggregated lane line in the current aggregation end data, and determining that a first aggregated lane line in the current aggregation end data that receives no fewer than 2 votes and the selected first aggregated lane line are the same lane line.
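A sketch of the voting scheme in steps B4-B5. The helpers distance() and tangent_distance() and the lane attributes (end_point, start_point, cluster_label) are hypothetical stand-ins for the quantities computed in steps B1-B3.

def vote_for_successor(selected, end_candidates, distance, tangent_distance):
    votes = {id(c): 0 for c in end_candidates}
    # 1 vote: same clustering label as the selected lane line
    for c in end_candidates:
        if c.cluster_label == selected.cluster_label:
            votes[id(c)] += 1
    # 1 vote: smallest Euclidean distance between end point and start point
    nearest = min(end_candidates,
                  key=lambda c: distance(selected.end_point, c.start_point))
    votes[id(nearest)] += 1
    # 1 vote: smallest vertical distance between the two tangent lines
    nearest_tan = min(end_candidates, key=lambda c: tangent_distance(selected, c))
    votes[id(nearest_tan)] += 1
    # candidates with at least 2 votes are treated as the same lane line
    return [c for c in end_candidates if votes[id(c)] >= 2]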
In the embodiment of the present invention, fig. 9a is a flowchart for obtaining a fitted lane line by fitting each lane line point of each second fitted lane line. Referring to fig. 9a, the fitting the respective lane line points of the second aggregated lane line according to the second coordinates of the respective lane line points of the second aggregated lane line to obtain a fitted lane line in the step a10 above may include:
step 901, selecting a second aggregation lane line, and using the starting point of the second aggregation lane line as a control point.
Step 902, taking the control point as the current point, calculating the absolute value of the difference between the course angle of the next lane line point of the current point and the course angle of the current point as the course angle difference.
Specifically, the absolute value of the difference between the course angle of the next lane line point of the current point and the course angle of the current point may be calculated as the course angle difference by using the following formula:
ΣΔθi = Σ(|θi - θ(i-1)|)

where i is the serial number of the lane line point following the current point, i-1 is the serial number of the current point, θi is the heading angle of point i, and Δθi is the absolute value of the difference in heading angle between point i and point i-1.
Step 903, determining whether the difference value of the heading angle is greater than or equal to a preset heading angle threshold, if so, executing step 904, and if not, executing step 905.
And 904, determining the next lane line point of the current point as a new control point, and returning to execute 902.
Step 905, determining the next lane line point of the current point as a new current point, calculating the absolute value of the difference between the course angle of the next lane line point of the current point and the course angle of the current point, taking the sum of the absolute value of the difference and the difference of the course angles as a new course angle difference, and returning to execute step 903.
Step 906, after all the lane line points on the second aggregated lane line are traversed, taking each control point on the second aggregated lane line as a lane line fitting point, and performing interpolation algorithm spline curve fitting to obtain a fitted lane line.
Specifically, Catmull-Rom spline curve fitting can be performed, using the control points to fit a lane line as the fitted lane line.
Step 907, selecting a new second aggregation lane line, using the starting point of the new second aggregation lane line as a control point, and returning to execute step 902.
And 908, finishing the operation after all the second aggregation lane lines are fitted.
Referring to fig. 9b, where the triangle points 910 are control points, a lane line is fitted by using a plurality of control points 910 as a fitted lane line 920.
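A minimal sketch of the control point selection in steps 901-906: a lane line point becomes a new control point once the accumulated heading angle change since the last control point reaches the preset heading angle threshold. heading() is a hypothetical helper returning the heading angle of a lane line point.

def select_control_points(points, heading, angle_threshold):
    controls = [points[0]]                 # the start point is the first control point
    accumulated = 0.0
    for prev, nxt in zip(points, points[1:]):
        accumulated += abs(heading(nxt) - heading(prev))
        if accumulated >= angle_threshold:
            controls.append(nxt)           # new control point; reset the accumulation
            accumulated = 0.0
    return controls                        # then fed to Catmull-Rom spline fitting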
In the embodiment of the present invention, fig. 10a is a flowchart for performing a smoothing process on the fitted lane line to obtain a target lane line. Referring to fig. 10a, smoothing the fitted lane line in step A11 to obtain the target lane line may include:
and 1001, calculating the fitting direction of each fitted lane line according to the curve fitting coefficient of each fitted lane line.
Step 1002, a fitted lane line is selected and set as a current fitted lane line.
Step 1003, determining whether a fitted lane line closest to the current fitted lane line exists within a preset radius along the fitting direction of the current fitted lane line, if the determination result is no, executing step 1004, and if the determination result is yes, executing step 1005.
And step 1004, selecting a new fitted lane line as the current fitted lane line, and returning to execute step 1003.
Step 1005, taking the fitted lane line closest to the current fitted lane line as a found lane line, and setting the starting point of the current fitted lane line and the starting point of the found lane line as control points when a gap exists between the found lane line and the current fitted lane line; and interpolating the gaps among the control points by using a spline curve fitting function to generate new lane line points.
Specifically, the spline curve fitting function may be a Catmull-Rom spline curve fitting function.
Step 1006, selecting a preset number of lane line points before the end point of the currently fitted lane line, the generated new lane line points, and a preset number of lane line points after the start point of the searched lane line, and calculating a coordinate variation amplitude between two adjacent lane line points.
Step 1007, if the amplitude of the coordinate change between two adjacent lane line points is greater than the preset amplitude threshold, determining that the lane line point is a discontinuity point.
Step 1008, selecting a discontinuity point, a preset number of lane line points before the discontinuity point, and a preset number of lane line points after the discontinuity point to establish a sliding window.
Step 1009, using the weighted moving average model to smooth the lane line points in the sliding window, and using the lane line obtained after the smoothing process as the target lane line.
In this step, the following formula may be specifically adopted to smooth the lane line points in the sliding window:
P'_j = ∑_{i=−w}^{w} C_i * P_{j+i}
where P'_j is the three-dimensional coordinate of lane line point P_j after smoothing; w is an integer equal to half the size of the sliding window used by the weighted moving average model; i is the serial number of a lane line point within the sliding window, with a value range of [−w, w]; C_i is the weight corresponding to each lane line point in the sliding window; P_{j+i} is the three-dimensional coordinate of the (j+i)-th point before smoothing; and j is the serial number of lane line point P_j.
Fig. 10b is a schematic diagram of smoothing the fitted lane line.
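The following is a minimal Python sketch of steps 1006 to 1009, assuming the fitted lane line is given as an (N, 3) array of point coordinates. The uniform weights C_i and the function name are assumptions, since the patent does not fix the weight values.

```python
import numpy as np

def smooth_discontinuities(points, w, amp_threshold, weights=None):
    """Find lane line points whose coordinate change from the previous point exceeds
    the amplitude threshold (step 1007), build a sliding window of w points on each
    side of each such point (step 1008), and replace every point in the window by the
    weighted moving average P'_j = sum_{i=-w..w} C_i * P_{j+i} (step 1009)."""
    points = np.asarray(points, dtype=float)
    if weights is None:
        weights = np.full(2 * w + 1, 1.0 / (2 * w + 1))       # C_i (uniform, assumed)
    smoothed = points.copy()
    jumps = np.linalg.norm(np.diff(points, axis=0), axis=1)   # coordinate change amplitude
    for j in np.where(jumps > amp_threshold)[0] + 1:          # discontinuity points
        lo = max(j - w, w)                                    # keep the averaging window inside the array
        hi = min(j + w, len(points) - 1 - w)
        for k in range(lo, hi + 1):
            smoothed[k] = weights @ points[k - w : k + w + 1]  # P'_k
    return smoothed
```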
Based on the same inventive concept, according to the lane line aggregation method provided in the above embodiment of the present invention, correspondingly, another embodiment of the present invention further provides a lane line aggregation device, which specifically includes:
the data acquisition module is used for acquiring each frame of visual crowdsourcing data acquired in each preset time length of a plurality of continuous preset time lengths; wherein each frame of visual crowdsourcing data comprises: coordinate information of one or more lane lines;
the first coordinate calculation module is used for calculating the coordinates of a plurality of specified lane line points of each frame of visual crowdsourcing data in a vehicle body coordinate system as first coordinates aiming at each lane line;
the second coordinate calculation module is used for converting the first coordinates of each lane line point of each lane line in each frame of visual crowdsourcing data into second coordinates in the reference coordinate system based on the conversion relation between the vehicle body coordinate system and the reference coordinate system;
the first lane line aggregation module is used for aggregating lane lines in the multi-frame visual crowdsourcing data in each preset time length to obtain one or more first aggregated lane lines;
the second lane line aggregation module aggregates the first aggregation lane lines within two adjacent preset time lengths to obtain one or more second aggregation lane lines;
and the lane line fitting module is used for fitting each lane line point of the second aggregated lane line according to the second coordinate of each lane line point of the second aggregated lane line to obtain a fitted lane line.
Therefore, by adopting the device provided by the embodiment of the invention, each frame of visual crowdsourcing data collected in each preset time length of a plurality of continuous preset time lengths is obtained; for each lane line in each frame of visual crowdsourcing data, the coordinates of a plurality of specified lane line points of the lane line in the vehicle body coordinate system are calculated as first coordinates; the first coordinates of each lane line point of each lane line in each frame of visual crowdsourcing data are converted into second coordinates in the reference coordinate system based on a preset conversion relation between the vehicle body coordinate system and the reference coordinate system; the lane lines in the multi-frame visual crowdsourcing data in each preset time length are aggregated to obtain one or more first aggregation lane lines; the first aggregation lane lines within two adjacent preset time lengths are aggregated to obtain one or more second aggregation lane lines; and for each second aggregation lane line, each lane line point of the second aggregation lane line is fitted according to its second coordinate to obtain a fitted lane line. Because the lane lines in the multi-frame visual crowdsourcing data in each preset time length are first aggregated into the first aggregation lane lines, and the first aggregation lane lines in two adjacent preset time lengths are then aggregated into the second aggregation lane lines, this multi-stage aggregation makes the accuracy of the determined fitted lane line higher.
An embodiment of the present invention further provides an electronic device, as shown in fig. 11, including a processor 1101, a communication interface 1102, a memory 1103 and a communication bus 1104, where the processor 1101, the communication interface 1102 and the memory 1103 complete mutual communication through the communication bus 1104,
a memory 1103 for storing a computer program;
the processor 1101 is configured to implement the following steps when executing the program stored in the memory 1103:
acquiring each frame of visual crowdsourcing data acquired in each preset time length of a plurality of continuous preset time lengths; wherein each frame of visual crowdsourcing data comprises: coordinate information of one or more lane lines;
calculating coordinates of a plurality of specified lane line points of each frame of visual crowdsourcing data in a vehicle body coordinate system as first coordinates aiming at each lane line in each frame of visual crowdsourcing data;
converting first coordinates of each lane line point of each lane line in each frame of visual crowdsourcing data into second coordinates in a reference coordinate system based on a conversion relation between a preset vehicle body coordinate system and the reference coordinate system;
aggregating the lane lines in the multi-frame visual crowdsourcing data in each preset time length to obtain one or more first aggregated lane lines;
aggregating the first aggregation lane lines within two adjacent preset time lengths to obtain one or more second aggregation lane lines;
and for each second aggregation lane line, fitting each lane line point of the second aggregation lane line according to the second coordinate of each lane line point of the second aggregation lane line to obtain a fitted lane line.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In yet another embodiment of the present invention, a computer-readable storage medium is further provided, in which a computer program is stored, and the computer program, when executed by a processor, implements the steps of any of the lane line aggregation methods described above.
In yet another embodiment, a computer program product containing instructions is provided, which when run on a computer, causes the computer to perform any of the lane line aggregation methods described in the above embodiments.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the invention are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via a wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) connection. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center that incorporates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the device, the electronic apparatus and the storage medium embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and the relevant points can be referred to the partial description of the method embodiments.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (9)

1. A lane line aggregation method, comprising:
acquiring each frame of visual crowdsourcing data acquired in each preset time length of a plurality of continuous preset time lengths; wherein each frame of visual crowdsourcing data comprises: coordinate information of one or more lane lines;
calculating coordinates of a plurality of specified lane line points of each frame of visual crowdsourcing data in a vehicle body coordinate system as first coordinates aiming at each lane line in each frame of visual crowdsourcing data;
converting first coordinates of each lane line point of each lane line in each frame of visual crowdsourcing data into second coordinates in a reference coordinate system based on a conversion relation between a vehicle body coordinate system and the reference coordinate system;
aggregating the lane lines in the multi-frame visual crowdsourcing data in each preset time length to obtain one or more first aggregated lane lines;
aggregating the first aggregation lane lines within two adjacent preset time lengths to obtain one or more second aggregation lane lines;
for each second aggregation lane line, fitting each lane line point of the second aggregation lane line according to the second coordinate of each lane line point of the second aggregation lane line to obtain a fitted lane line;
the aggregating the first aggregation lane lines within two adjacent preset time lengths to obtain one or more second aggregation lane lines includes:
determining the curvature and the curvature change rate of each first aggregation lane line;
selecting a first aggregation lane line in a first preset time length of a plurality of continuous preset time lengths as current aggregation starting data, and selecting a first aggregation lane line in a next adjacent preset time length of the preset time lengths as current aggregation ending data;
clustering a first aggregation lane line in the current aggregation starting data and a first aggregation lane line in the current ending data based on the curvature and the curvature change rate of the first aggregation lane line;
taking the current aggregation ending data as new current aggregation starting data, taking a first aggregation lane line in the next adjacent preset time length of the new current aggregation starting data as new current aggregation ending data, and returning to the step of clustering the first aggregation lane line in the current aggregation starting data and the first aggregation lane line in the current aggregation ending data based on the curvature and the curvature change rate of the first aggregation lane line until all the first aggregation lane lines participate in clustering;
and determining the first aggregation lane lines aggregated into one type as second aggregation lane lines to obtain one or more second aggregation lane lines.
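For illustration of the second-level aggregation described in claim 1, the sketch below shows how consecutive preset time lengths can be walked as current aggregation start data and current aggregation end data. The names first_agg_by_window and cluster_between are assumptions; cluster_between stands for the curvature-based clustering detailed in claim 6.

```python
def aggregate_adjacent_windows(first_agg_by_window, cluster_between):
    """first_agg_by_window: one list of first aggregation lane lines per preset time
    length, in time order.  Each adjacent pair of windows is clustered; lines that end
    up with the same cluster label across windows form one second aggregation lane line."""
    current_start = first_agg_by_window[0]           # first preset time length
    for current_end in first_agg_by_window[1:]:      # next adjacent preset time length
        cluster_between(current_start, current_end)  # clustering on curvature / curvature change rate
        current_start = current_end                  # the end data becomes the new start data
```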
2. The method of claim 1, wherein for each lane line in each frame of visual crowdsourcing data, calculating coordinates of a plurality of specified lane line points of the lane line in a body coordinate system as first coordinates comprises:
for each lane line in each frame of visual crowdsourcing data, calculating the coordinates of a plurality of specified lane line points of the lane line under a vehicle body coordinate system by adopting the following formula as first coordinates:
X_car = C0 + C1*Y_car + C2*Y_car^2 + C3*Y_car^3
wherein X_car and Y_car are respectively the abscissa and the ordinate of the lane line point in the vehicle body coordinate system; Y_car is provided by each frame of visual crowdsourcing data; C0, C1, C2 and C3 are the curve fitting coefficients of the lane line at that moment provided by each frame of visual crowdsourcing data, where C0 indicates the offset distance of the vehicle from the lane line, C1 is the tangent value of the yaw angle, C2 is the curvature of the lane line, and C3 is the curvature change rate of the lane line.
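A one-line Python rendering of the formula in claim 2; the coefficient names follow the claim, and the sample values in the usage line are purely illustrative.

```python
def body_frame_x(y_car, c0, c1, c2, c3):
    """Claim 2: abscissa of a lane line point in the vehicle body coordinate system,
    computed from its ordinate and the per-frame curve fitting coefficients."""
    return c0 + c1 * y_car + c2 * y_car ** 2 + c3 * y_car ** 3

# Illustrative values only: 1.8 m lateral offset, small yaw, gentle curvature.
x_car = body_frame_x(10.0, c0=1.8, c1=0.01, c2=1e-4, c3=1e-6)
```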
3. The method of claim 1, wherein converting the first coordinates of the respective lane line points of each lane line in each frame of visual crowd-sourced data to the second coordinates in the reference coordinate system based on a conversion relationship between the body coordinate system and the reference coordinate system comprises:
converting the first coordinates of the respective lane line points into second coordinates in a reference coordinate system using the following formula:
[X_WGS84, Y_WGS84, Z_WGS84]^T = R * [X_car, Y_car, Z_car]^T
wherein X_WGS84, Y_WGS84 and Z_WGS84 respectively represent the abscissa, the ordinate and the vertical coordinate of the lane line point in the reference coordinate system, and R is a preset conversion matrix from the vehicle body coordinate system to the reference coordinate system; X_car, Y_car and Z_car respectively represent the abscissa, the ordinate and the vertical coordinate of the lane line point in the vehicle body coordinate system, where Y_car and Z_car are provided by the visual crowdsourcing data and X_car is calculated based on Y_car.
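A minimal Python sketch of the conversion in claim 3, assuming R is supplied as a 3x3 NumPy array; how R itself is derived from the vehicle pose is outside this claim and not shown.

```python
import numpy as np

def body_to_reference(points_body, R):
    """Claim 3: convert first coordinates [X_car, Y_car, Z_car] in the vehicle body
    coordinate system to second coordinates in the reference coordinate system using
    the preset conversion matrix R.  points_body: (N, 3) array of body-frame points."""
    return (np.asarray(R, dtype=float) @ np.asarray(points_body, dtype=float).T).T
```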
4. The method of claim 1, wherein visually crowdsourcing data per frame further comprises: the serial number of each frame of visual crowdsourcing data and the curve fitting coefficient of the lane line at the moment;
The aggregating of the lane lines in the multi-frame visual crowdsourcing data within each preset time length to obtain one or more first aggregation lane lines includes:
sequencing the frames of visual crowdsourcing data in each preset time length according to the sequence of the sequence number aiming at the multiframe visual crowdsourcing data in each preset time length; sequencing a plurality of lane lines in the same frame of visual crowdsourcing data according to the size sequence of the abscissa in the first coordinate of the starting point of each lane line;
selecting a frame of visual crowdsourcing data with the minimum serial number as an initial frame, selecting one lane line in the initial frame as a current initial lane line, and using the next frame of visual crowdsourcing data of the initial frame as a search frame;
selecting a lane line in a search frame, judging whether the current initial lane line is successfully matched with the lane line according to the curve fitting coefficient of the current initial lane line and the curve fitting coefficient of the lane line, if so, marking the current initial lane line and the lane line as the same lane line, and taking the lane line as the matched lane line; if the matching is not successful, continuing to select the next lane line in the search frame and returning to execute the step; wherein, the successful matching of the two lane lines represents that: the two lane lines are the same lane line or the two lane lines are on the same lane;
when each lane line in the search frame is matched with the current lane line, taking the current starting lane line as the matched starting lane line, and judging whether other lane lines exist in the starting frame except the matched starting lane line;
if other lane lines exist, selecting one lane line from the other lane lines as a new current starting lane line, selecting one lane line except the matched lane line in the search frame, and returning to the step of judging whether the current starting lane line is successfully matched with the lane line;
if no other lane lines exist, taking the search frame as a new initial frame and selecting one lane line in the initial frame as a current initial lane line, taking the next frame of visual crowdsourcing data of the initial frame as a new search frame, returning to the step of selecting one lane line in the search frame, and judging whether the current initial lane line and the lane line are successfully matched until all the lane lines in the visual crowdsourcing data in the preset time length are matched;
and performing lane line fitting by using each lane line point of the lane lines with the same identification in the preset time length to obtain one or more first aggregation lane lines.
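The loop structure of claim 4 can be sketched as below. The claim does not spell out the coefficient-based matching test, so it is passed in as an opaque is_match callable, and giving unmatched lines fresh identifiers is an assumption about how newly appearing lines are handled; the dict keys are illustrative.

```python
def label_lane_lines_in_window(frames, is_match):
    """frames: the frames of one preset time length, sorted by sequence number; each
    frame is a list of lane line dicts sorted by the abscissa of their start point,
    each carrying its curve fitting coefficients under 'coeffs'.  Lines judged to be
    the same physical lane line receive the same 'id'."""
    next_id = 0
    for line in frames[0]:                             # start frame: smallest sequence number
        line["id"] = next_id
        next_id += 1
    for start_frame, search_frame in zip(frames, frames[1:]):
        for cand in search_frame:
            cand.setdefault("id", None)
        for start_line in start_frame:                 # current starting lane line
            for cand in search_frame:
                if cand["id"] is None and is_match(start_line["coeffs"], cand["coeffs"]):
                    cand["id"] = start_line["id"]      # mark as the same lane line
                    break                              # cand becomes the matched lane line
        for cand in search_frame:
            if cand["id"] is None:                     # a line with no match starts a new identity
                cand["id"] = next_id
                next_id += 1
    return frames
```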
5. The method of claim 4, wherein the performing lane line fitting using the lane line points of the lane lines with the same identifier within the preset time period to obtain one or more first aggregated lane lines comprises:
selecting a first preset number of lane line points from all lane line points of lane lines with the same identification in the preset time;
calculating a fitted curve model of the lane line according to the second coordinates of the first preset number of lane line points;
calculating the error between the second coordinates of the lane line points and the coordinates of the lane line points obtained based on the current fitted curve model aiming at the rest lane line points with the same identification;
counting the number of lane line points with error values smaller than a preset error threshold value as the number of available points;
judging whether the ratio of the number of the available points to the number of all lane line points with the same identification is larger than a preset ratio threshold value or not;
if not, reselecting a first preset number of lane line points with the same identification as a new first preset number of lane line points, and returning to the step of calculating the fitting curve model of the aggregated lane line according to the second coordinates of the first preset number of lane line points;
if so, selecting all the lane line points with the error values smaller than the preset error threshold value, and optimizing the current fitting curve model by using the second coordinates of all the lane line points with the selected error values smaller than the preset error threshold value to obtain an optimized fitting curve model;
judging whether the optimization times of the fitting curve model reach preset times or not;
if not, taking the optimized fitted curve model as a new current fitted curve model, returning to the step of calculating the error between the second coordinate of the lane line point and the coordinate of the lane line point obtained based on the current fitted curve model for each remaining lane line point with the same identifier;
and if so, determining the lane line determined by the optimized curve fitting model as a first aggregation lane line.
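The iterative selection and refinement in claim 5 resembles a RANSAC-style loop. The sketch below assumes a planar cubic polynomial x = f(y) as the fitted curve model and NumPy's polyfit as the fitting routine; both are stand-ins, since the claim does not name a specific model.

```python
import numpy as np

def fit_first_aggregation_line(points, n_sample, err_thresh, ratio_thresh, n_refine, max_tries=100):
    """points: (N, 2) array of (y, x) second coordinates of lane line points sharing one
    identification.  Sample points, fit, count inliers, and once the inlier ratio passes
    the threshold, refine the model on the inliers a preset number of times."""
    points = np.asarray(points, dtype=float)
    y, x = points[:, 0], points[:, 1]
    rng = np.random.default_rng()
    inliers = np.ones(len(points), dtype=bool)
    for _ in range(max_tries):
        idx = rng.choice(len(points), size=n_sample, replace=False)  # first preset number of points
        coeffs = np.polyfit(y[idx], x[idx], deg=3)                   # current fitted curve model
        err = np.abs(np.polyval(coeffs, y) - x)                      # per-point error
        inliers = err < err_thresh                                   # "available points"
        if inliers.sum() / len(points) > ratio_thresh:               # preset ratio threshold
            break
    for _ in range(n_refine):                                        # preset number of optimisations
        coeffs = np.polyfit(y[inliers], x[inliers], deg=3)           # optimise with the inliers
        err = np.abs(np.polyval(coeffs, y) - x)
        inliers = err < err_thresh
    return coeffs
```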
6. The method of claim 1, wherein clustering the first aggregated lane line in the current aggregated start data and the first aggregated lane line in the current end data based on a curvature of the first aggregated lane line and a rate of change of curvature comprises:
aiming at each first aggregation lane line, establishing a pseudo coordinate point by taking the curvature of the first aggregation lane line as an abscissa and the curvature change rate of the first aggregation lane line as an ordinate; each pseudo coordinate point corresponds to one first aggregation lane line;
determining all the pseudo coordinate points of which the Euclidean distance from the pseudo coordinate point is not more than a preset search neighborhood range as the adjacent points of the pseudo coordinate point aiming at each pseudo coordinate point;
if the number of the adjacent points of the pseudo coordinate point is less than the preset minimum number of the adjacent points, marking the pseudo coordinate point as a noise point;
if the number of the adjacent points of the pseudo-coordinate point is larger than or equal to the preset minimum number of the adjacent points, marking the pseudo-coordinate point as a core point, and distributing a clustering label for the pseudo-coordinate point; judging whether the adjacent point is a core point or not aiming at each adjacent point of the pseudo-coordinate point, and if the adjacent point is not the core point, distributing the same clustering label as the pseudo-coordinate point to the adjacent point;
and clustering the first aggregation lane lines corresponding to the pseudo coordinate points with the same clustering labels into one type.
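Claim 6 describes a density-based neighbourhood rule very close to DBSCAN over the pseudo coordinate points (curvature, curvature change rate). The sketch below uses scikit-learn's DBSCAN as a stand-in; its cluster expansion differs in minor details (for example, whether a point counts as its own neighbour) from the rule stated in the claim.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_first_aggregation_lines(curvatures, curvature_rates, search_radius, min_neighbors):
    """Each first aggregation lane line becomes one pseudo coordinate point
    (curvature, curvature change rate).  Points with enough neighbours inside the
    search radius share a cluster label; isolated points are labelled noise (-1)."""
    pseudo_points = np.column_stack([curvatures, curvature_rates])
    labels = DBSCAN(eps=search_radius, min_samples=min_neighbors).fit_predict(pseudo_points)
    return labels   # lines with equal non-negative labels are clustered into one class
```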
7. The method of claim 1, further comprising, before the clustering the first aggregated lane line in the current aggregation start data and the first aggregated lane line in the current end data based on a curvature of the first aggregated lane line and a curvature change rate:
calculating the Euclidean distance between the end point of each first aggregation lane line in the current aggregation starting data and the start point of each first aggregation lane line in the current aggregation ending data;
calculating the vertical distance between the tangent line at the end point of each first aggregation lane line in the current aggregation initial data and the tangent line at the start point of each first aggregation lane line in the current aggregation end data;
the clustering a first aggregation lane line in the current aggregation start data and a first aggregation lane line in the current end data based on the curvature and the curvature change rate of the first aggregation lane line includes:
for each first aggregation lane line, determining a clustering label corresponding to the first aggregation lane line according to the curvature and the curvature change rate of the first aggregation lane line;
selecting a first aggregation lane line in the current aggregation starting data, and voting for each first aggregation lane line in the current aggregation end data that has the same clustering label as the selected first aggregation lane line;
voting for the first aggregation lane line in the current aggregation end data whose start point has the minimum Euclidean distance from the end point of the selected first aggregation lane line;
voting for the first aggregation lane line in the current aggregation end data whose tangent line at its start point has the minimum vertical distance from the tangent line at the end point of the selected first aggregation lane line;
counting the number of votes of each first aggregation lane line in the current aggregation end data, and determining that a first aggregation lane line whose number of votes is not less than 2 in the current aggregation end data and the selected first aggregation lane line are the same lane line.
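The three-way vote of claim 7 can be sketched as follows; the distances are assumed to have been precomputed as in the two preceding calculating steps, and the argument names are illustrative.

```python
import numpy as np

def vote_same_lane_line(start_label, end_labels, euclid_dists, tangent_dists):
    """For one first aggregation lane line in the current aggregation start data:
    end_labels[k] is the cluster label of candidate k in the current aggregation end
    data; euclid_dists[k] is the distance from the start line's end point to candidate
    k's start point; tangent_dists[k] is the vertical distance between the tangents at
    those two points.  Candidates collecting at least 2 of the 3 votes are the same line."""
    votes = np.zeros(len(end_labels), dtype=int)
    votes[np.asarray(end_labels) == start_label] += 1   # vote 1: same clustering label
    votes[int(np.argmin(euclid_dists))] += 1            # vote 2: nearest end point to start point
    votes[int(np.argmin(tangent_dists))] += 1           # vote 3: nearest tangent lines
    return np.where(votes >= 2)[0]                      # indices judged to be the same lane line
```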
8. The method of claim 1, wherein for each second aggregate lane line, fitting the respective lane line points of the second aggregate lane line according to the second coordinates of the respective lane line points of the second aggregate lane line to obtain a fitted lane line comprises:
selecting a second aggregation lane line, taking the starting point of the second aggregation lane line as a control point, taking the control point as a current point, and calculating the absolute value of the difference between the course angle of the next lane line point of the current point and the course angle of the current point as a course angle difference;
judging whether the difference value of the course angles is larger than or equal to a preset course angle threshold value or not;
if yes, determining the next lane line point of the current point as a new control point, returning to the step of taking the control point as the current point, and calculating the absolute value of the difference between the course angle of the next lane line point of the current point and the course angle of the current point as a course angle difference;
if not, determining the next lane line point of the current point as a new current point, calculating the absolute value of the difference between the course angle of the next lane line point of the current point and the course angle of the current point, and taking the sum of the absolute value of the difference and the difference of the course angles as a new course angle difference; returning to the step of judging whether the difference value of the course angle is larger than or equal to a preset course angle threshold value;
after all the lane line points on the second aggregation lane line are traversed, taking each control point on the second aggregation lane line as a lane line fitting point, and performing spline curve fitting by an interpolation algorithm to obtain a fitted lane line;
and selecting a new second aggregation lane line, returning to the step of taking the starting point of the second aggregation lane line as a control point, taking the control point as the current point, and calculating the absolute value of the difference between the course angle of the next lane line point of the current point and the course angle of the current point as the course angle difference until all the second aggregation lane lines are fitted.
9. The method of claim 1, further comprising: smoothing the fitted lane line to obtain a target lane line, comprising:
calculating the fitting direction of each fitted lane line according to the curve fitting coefficient of each fitted lane line;
selecting a fitted lane line as a current fitted lane line, and searching whether a fitted lane line with the closest distance to the current fitted lane line exists in a preset radius along the fitting direction of the current fitted lane line;
if not, selecting a new fitted lane line, returning to the fitting direction along the current fitted lane line, and searching whether a fitted lane line closest to the current fitted lane line exists within a preset radius;
if the current fitted lane line exists, the fitted lane line closest to the current fitted lane line is used as a searched lane line, and when a gap exists between the searched lane line and the current fitted lane line, the starting point of the current fitted lane line and the starting point of the searched lane line are set as control points; interpolating gaps among the control points by using a spline curve fitting function to generate new lane line points;
selecting a preset number of lane line points before the end point of the current fitted lane line, the generated new lane line points and a preset number of lane line points after the start point of the searched lane line, and calculating the coordinate change amplitude between two adjacent lane line points;
if the amplitude of the coordinate change between two adjacent lane line points is larger than a preset amplitude threshold value, determining that the lane line point is a discontinuity point;
selecting a discontinuity point, a preset number of lane line points before the discontinuity point and a preset number of lane line points after the discontinuity point to establish a sliding window;
and smoothing the lane line points in the sliding window by using a weighted sliding average model, and taking the lane line obtained after smoothing as a target lane line.
CN202010953265.9A 2020-09-11 2020-09-11 Lane line polymerization method Active CN112050821B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010953265.9A CN112050821B (en) 2020-09-11 2020-09-11 Lane line polymerization method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010953265.9A CN112050821B (en) 2020-09-11 2020-09-11 Lane line polymerization method

Publications (2)

Publication Number Publication Date
CN112050821A CN112050821A (en) 2020-12-08
CN112050821B true CN112050821B (en) 2021-08-20

Family

ID=73610794

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010953265.9A Active CN112050821B (en) 2020-09-11 2020-09-11 Lane line polymerization method

Country Status (1)

Country Link
CN (1) CN112050821B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112734927B (en) * 2021-03-31 2021-06-25 湖北亿咖通科技有限公司 Method and device for simplifying high-precision map lane line and computer storage medium
CN113428179B (en) * 2021-07-30 2022-06-28 广州文远知行科技有限公司 Method and device for detecting lane distance, computer equipment and storage medium
CN113551664B (en) * 2021-08-02 2022-02-25 湖北亿咖通科技有限公司 Map construction method and device, electronic equipment and storage medium
CN113591730B (en) * 2021-08-03 2023-11-10 湖北亿咖通科技有限公司 Method, device and equipment for identifying lane grouping lines
CN114399588B (en) * 2021-12-20 2022-11-11 禾多科技(北京)有限公司 Three-dimensional lane line generation method and device, electronic device and computer readable medium
CN117739950B (en) * 2023-12-21 2024-06-14 万物镜像(北京)计算机系统有限公司 Map generation method, device and equipment

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103940434A (en) * 2014-04-01 2014-07-23 西安交通大学 Real-time lane line detecting system based on monocular vision and inertial navigation unit
CN104751151A (en) * 2015-04-28 2015-07-01 苏州安智汽车零部件有限公司 Method for identifying and tracing multiple lanes in real time
JP2018097424A (en) * 2016-12-08 2018-06-21 日本電信電話株式会社 Clustering apparatus, artifact identification apparatus, trunk detection apparatus, method, and program
CN108286979A (en) * 2017-01-09 2018-07-17 北京四维图新科技股份有限公司 A kind of method and apparatus and navigation system, control loop obtaining high-precision navigation path data
CN109084782A (en) * 2017-06-13 2018-12-25 蔚来汽车有限公司 Lane line map constructing method and building system based on camera sensing device
CN109255316A (en) * 2018-08-30 2019-01-22 深圳市路畅科技股份有限公司 A kind of lane shift detection method and system
CN109300139A (en) * 2018-09-30 2019-02-01 百度在线网络技术(北京)有限公司 Method for detecting lane lines and device
CN109409202A (en) * 2018-09-06 2019-03-01 惠州市德赛西威汽车电子股份有限公司 Robustness method for detecting lane lines based on dynamic area-of-interest
CN109916416A (en) * 2019-01-29 2019-06-21 腾讯科技(深圳)有限公司 Lane line data processing and update method, device and equipment
CN110345952A (en) * 2019-07-09 2019-10-18 同济人工智能研究院(苏州)有限公司 A kind of serializing lane line map constructing method and building system
CN110426051A (en) * 2019-08-05 2019-11-08 武汉中海庭数据技术有限公司 A kind of lane line method for drafting, device and storage medium
CN111088737A (en) * 2019-12-31 2020-05-01 中国公路工程咨询集团有限公司 Method and system for single-horn intercommunicating grade separation linear design
JP2020094830A (en) * 2018-12-10 2020-06-18 トヨタ自動車株式会社 Map generation system
CN111353466A (en) * 2020-03-12 2020-06-30 北京百度网讯科技有限公司 Lane line recognition processing method, lane line recognition processing device, and storage medium
CN111462029A (en) * 2020-03-27 2020-07-28 北京百度网讯科技有限公司 Visual point cloud and high-precision map fusion method and device and electronic equipment
CN111611958A (en) * 2020-05-28 2020-09-01 武汉四维图新科技有限公司 Method, device and equipment for determining lane line shape in crowdsourcing data

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7522091B2 (en) * 2002-07-15 2009-04-21 Automotive Systems Laboratory, Inc. Road curvature estimation system
EP3631675B1 (en) * 2017-06-28 2023-09-13 Huawei Technologies Co., Ltd. Advanced driver assistance system and method
US11194847B2 (en) * 2018-12-21 2021-12-07 Here Global B.V. Method, apparatus, and computer program product for building a high definition map from crowd sourced data
CN111198562A (en) * 2019-12-31 2020-05-26 武汉中海庭数据技术有限公司 Preprocessing optimization method for space line characteristics of crowdsourcing fragment map

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"A robust lane detection method based on hyperbolic model";Wenhui Li等;《Soft Computing》;20191231;第9161-9174页 *
"基于实例分割的车道线检测及自适应拟合算法";田锦 等;《计算机应用》;20200710;第1932-1937页 *

Also Published As

Publication number Publication date
CN112050821A (en) 2020-12-08

Similar Documents

Publication Publication Date Title
CN112050821B (en) Lane line polymerization method
CN109087510B (en) Traffic monitoring method and device
CN110415277B (en) Multi-target tracking method, system and device based on optical flow and Kalman filtering
CN109059944B (en) Motion planning method based on driving habit learning
CN108171131B (en) Improved MeanShift-based method for extracting Lidar point cloud data road marking line
CN111912416B (en) Method, device and equipment for positioning equipment
CN112667837A (en) Automatic image data labeling method and device
CN109710708B (en) Electronic map mapping method and device
CN113551664B (en) Map construction method and device, electronic equipment and storage medium
CN110389995B (en) Lane information detection method, apparatus, device, and medium
CN112747755B (en) Method and device for determining road route, readable storage medium and map updating system
CN112036359B (en) Method for obtaining topological information of lane line, electronic device and storage medium
CN112198878B (en) Instant map construction method and device, robot and storage medium
CN115376109B (en) Obstacle detection method, obstacle detection device, and storage medium
WO2022110862A1 (en) Method and apparatus for constructing road direction arrow, electronic device, and storage medium
CN110544268B (en) Multi-target tracking method based on structured light and SiamMask network
CN116071722A (en) Lane geometric information extraction method, system, equipment and medium based on road section track
CN114137562B (en) Multi-target tracking method based on improved global nearest neighbor
CN117269952A (en) Method and device for semi-automatically labeling moving target point cloud of 4D imaging millimeter wave radar
CN112381873A (en) Data labeling method and device
CN114782496A (en) Object tracking method and device, storage medium and electronic device
CN113048988B (en) Method and device for detecting change elements of scene corresponding to navigation map
CN111426321B (en) Positioning method and device for indoor robot
CN114705180A (en) Data correction method, device and equipment for high-precision map and storage medium
CN111488771B (en) OCR hooking method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant