CN112949609B - Lane recognition method and device, medium and electronic equipment - Google Patents


Info

Publication number
CN112949609B
Authority
CN
China
Prior art keywords: information, combined, lane, line, linear
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110413035.8A
Other languages
Chinese (zh)
Other versions
CN112949609A (en)
Inventor
夏靖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing CHJ Automobile Technology Co Ltd
Original Assignee
Beijing CHJ Automobile Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing CHJ Automobile Technology Co Ltd filed Critical Beijing CHJ Automobile Technology Co Ltd
Priority to CN202110413035.8A priority Critical patent/CN112949609B/en
Publication of CN112949609A publication Critical patent/CN112949609A/en
Application granted granted Critical
Publication of CN112949609B publication Critical patent/CN112949609B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588: Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29: Geographical information databases
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures


Abstract

The disclosure provides a lane recognition method and device, a medium, and an electronic device. The method obtains the line shape information of combined lane lines from the line shape information of map lane lines, and determines a matching object for the line shape information of a perceived lane line by matching the line shape information of the combined lane lines against it. Because the attribute information and type information of the perceived lane line need not be detected, information loss caused by detection failure is avoided, and the validity and accuracy of matching are improved.

Description

Lane recognition method and device, medium and electronic equipment
Technical Field
The disclosure relates to the field of artificial intelligence, and in particular to a lane recognition method and device, a medium, and an electronic device.
Background
An autonomous vehicle relies on artificial intelligence, visual computing, sensors, monitoring devices, and a global positioning system working together, so that the vehicle controller can operate the motor vehicle automatically and safely without human intervention.
Visual perception refers to object information formed in the mind after a human visual organ distinguishes objects. In the field of artificial intelligence, visual perception is object information obtained by distinguishing acquired images.
When an autonomous vehicle drives through an expressway ramp junction, the visual-perception output of the assisted navigation driving function detects the attribute information and type information of each perceived lane line and represents the lane line as a third-order parabola. A series of calculations on the third-order parabola then yields matching pairs of perceived lane lines and map lane lines, from which a corrected lateral distance and yaw angle are obtained to correct the navigation positioning value. In complex scenes, however, this approach often cannot accurately detect the attribute information and type information of the lane lines, and therefore cannot obtain the matching relationship between the perceived lane lines and the map lane lines, which directly causes jumps in the vehicle's positioning result. The resulting high false-detection rate at ramps affects the safety of the autonomous vehicle.
Disclosure of Invention
The present disclosure aims to provide a lane recognition method and device, a medium, and an electronic device that solve at least one of the technical problems mentioned above. The specific scheme is as follows:
According to a specific embodiment of the present disclosure, in a first aspect, the present disclosure provides a lane recognition method, including:
acquiring line shape information, the number, and geographic position information of perceived lane lines;
acquiring line shape information of map lane lines based on the geographic position information, wherein the line shape information of the map lane lines and the line shape information of the perceived lane lines are both represented in the same type of coordinate system;
acquiring line shape information of each group of combined lane lines from the line shape information of the map lane lines, wherein the number of lines in each group of combined lane lines is the same as the number of the perceived lane lines; and
matching the line shape information of the perceived lane lines with the line shape information of each group of combined lane lines, and determining a matching object of the line shape information of the perceived lane lines.
Optionally, the matching the line shape information of the perceived lane line with the line shape information of each group of combined lane lines, and determining the matching object of the line shape information of the perceived lane line includes:
Acquiring the information of the sensing sampling points and the information of the combined sampling points of each group of combined lane lines which are mutually related, wherein the information of the sensing sampling points is the information of the sampling points in the linear information of the sensing lane lines, and the information of the combined sampling points is the information of the sampling points in the linear information of each group of combined lane lines;
based on the sensing sampling point information and the combination sampling point information of each group of combination lane lines, carrying out matching calculation to obtain calculation results of the corresponding groups of combination lane lines;
when the calculation result meets a preset matching condition, determining that the linear information of the combined lane line corresponding to the calculation result is a matching object of the linear information of the perceived lane line.
Optionally, the acquiring the inter-related sensing sampling point information and the combined sampling point information includes:
Determining that the abscissa information of the combined sampling point information is the same as the abscissa information of the perceived sampling point information;
Determining ordinate information of the combined sampling point information based on the abscissa information of the combined sampling point information and the line information of the combined lane line;
and determining the ordinate information of the sensing sampling point information based on the abscissa information of the sensing sampling point information and the line information of the sensing lane line.
Optionally, the acquiring the inter-related sensing sampling point information and the combined sampling point information includes:
determining that the ordinate information of the combined sampling point information is identical to the ordinate information of the perception sampling point information;
Determining abscissa information of the combined sampling point information based on the ordinate information of the combined sampling point information and the line information of the combined lane line;
And determining the abscissa information of the sensing sampling point information based on the ordinate information of the sensing sampling point information and the line information of the sensing lane line.
Optionally, the performing matching calculation based on the perceived sampling point information and the combined sampling point information of each group of combined lane lines, to obtain a calculation result of each group of combined lane lines, includes:
And calculating root mean square error based on the sensing sampling point information and the combined sampling point information of each group of combined lane lines, and obtaining a calculation result of each group of corresponding combined lane lines.
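As a sketch of this root-mean-square-error calculation (the disclosure names RMSE but gives no formula, so the exact form below is an assumption, and all sample values are illustrative):

```python
import math

def rmse(perceived_ys, combined_ys):
    """Root mean square error over associated sampling-point ordinates."""
    n = len(perceived_ys)
    return math.sqrt(sum((p - c) ** 2 for p, c in zip(perceived_ys, combined_ys)) / n)

# Identical samples give 0.0; a constant 0.1 offset gives an RMSE of about 0.1.
score = rmse([1.7, 1.8, 1.9], [1.6, 1.7, 1.8])
```

A lower result means the group of combined lane lines lies closer to the perceived lane lines.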
Optionally, when the calculation result meets a preset matching condition, determining that the line shape information of the combined lane line corresponding to the calculation result is a matching object of the line shape information of the perceived lane line includes:
Acquiring a plurality of effective calculation results in history, wherein the linear information of the combined lane lines in history, which is associated with the effective calculation results, is determined as a matching object for sensing the linear information of the lane lines in history;
Determining the average value of the effective calculation results as a matching threshold value;
And when the calculation result is smaller than the matching threshold value, determining that the linear information of the combined lane line corresponding to the calculation result is a matching object of the linear information of the perceived lane line.
Optionally, when the calculation result meets a preset matching condition, determining that the line shape information of the combined lane line corresponding to the calculation result is a matching object of the line shape information of the perceived lane line includes:
and when the calculation result is the minimum value in all calculation results, determining the linear information of the combined lane line corresponding to the minimum value as a matching object of the linear information of the perceived lane line.
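The two optional matching conditions above (a threshold drawn from the mean of historically valid results, and the minimum over all calculation results) can be combined in a sketch like the following; the function and data names are illustrative, not from the disclosure:

```python
def pick_match(results, valid_history):
    """Pick the group whose calculation result is both the minimum over all
    groups and below the matching threshold (the mean of historically
    valid calculation results); return None if no group qualifies."""
    threshold = sum(valid_history) / len(valid_history)
    best_group, best_result = min(results.items(), key=lambda kv: kv[1])
    return best_group if best_result < threshold else None

# Illustrative results keyed by group of combined lane lines.
results = {("B2", "C2"): 0.12, ("A2", "B2"): 0.95, ("C2", "D2"): 0.80}
match = pick_match(results, valid_history=[0.10, 0.20, 0.15])  # ('B2', 'C2')
```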
Optionally, when the calculation result meets a preset matching condition, the method further includes:
and determining the calculation result as a valid calculation result.
Optionally, the method further comprises:
after determining that the linear information of the combined lane line is a matching object of the linear information of the perceived lane line, performing yaw calculation on the linear information of the perceived lane line and the linear information of the combined lane line to obtain a yaw value;
The geographic position information is corrected based on the yaw value.
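The disclosure does not give the yaw calculation itself; as a loose sketch under the assumption that the yaw value is the difference of the C (yaw-angle) coefficients of the matched pair and is subtracted from the navigation heading:

```python
def yaw_value(perceived_c, combined_c):
    """Assumed yaw value: difference of the yaw-angle (C) coefficients."""
    return perceived_c - combined_c

def correct_heading(heading_deg, yaw_deg):
    """Apply the yaw value to the navigation heading (sign is an assumption)."""
    return heading_deg - yaw_deg

corrected = correct_heading(90.0, yaw_value(0.5, 0.2))  # roughly 89.7
```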
Optionally, the linear information of the map lane line and the linear information of the perceived lane line are placed in the same coordinate system.
According to a second aspect of the present disclosure, there is provided a lane recognition apparatus comprising:
a perception information acquisition unit, configured to acquire line shape information, the number, and geographic position information of perceived lane lines;
a map information acquisition unit, configured to acquire line shape information of map lane lines based on the geographic position information, wherein the line shape information of the map lane lines and the line shape information of the perceived lane lines are both represented in the same type of coordinate system;
a combined information acquisition unit, configured to acquire line shape information of each group of combined lane lines from the line shape information of the map lane lines, wherein the number of lines in each group of combined lane lines is the same as the number of the perceived lane lines; and
a matching unit, configured to match the line shape information of the perceived lane lines with the line shape information of each group of combined lane lines and determine a matching object of the line shape information of the perceived lane lines.
Optionally, the matching unit includes:
The first acquisition subunit is used for acquiring the sensing sampling point information and the combined sampling point information of each group of combined lane lines which are mutually related, wherein the sensing sampling point information is information of sampling points in the linear information of the sensing lane lines, and the combined sampling point information is information of sampling points in the linear information of each group of combined lane lines;
The matching calculation subunit is used for carrying out matching calculation based on the sensing sampling point information and the combination sampling point information of each group of combination lane lines to obtain calculation results of the corresponding groups of combination lane lines;
And the object determining subunit is used for determining that the linear information of the combined lane line corresponding to the calculation result is a matching object of the linear information of the perceived lane line when the calculation result meets a preset matching condition.
Optionally, the first obtaining subunit includes:
A first determining subunit, configured to determine that abscissa information of the combined sampling point information is the same as abscissa information of the perceived sampling point information;
A second determining subunit, configured to determine ordinate information of the combined sampling point information based on abscissa information of the combined sampling point information and line information of the combined lane line;
and the third determination subunit is used for determining the ordinate information of the sensing sampling point information based on the abscissa information of the sensing sampling point information and the linear information of the sensing lane line.
Optionally, the first obtaining subunit includes:
the ordinate determining subunit is used for determining that the ordinate information of the combined sampling point information is the same as the ordinate information of the perception sampling point information;
A first abscissa determining subunit, configured to determine abscissa information of the combined sampling point information based on ordinate information of the combined sampling point information and line information of the combined lane line;
and the second abscissa determining subunit is used for determining the abscissa information of the sensing sampling point information based on the ordinate information of the sensing sampling point information and the linear information of the sensing lane line.
Optionally, the matching calculation subunit includes:
The first result obtaining subunit is used for calculating root mean square error based on the sensing sampling point information and the combined sampling point information of each group of combined lane lines and obtaining a calculation result of each group of corresponding combined lane lines.
Optionally, the object determining subunit includes:
a second acquisition subunit configured to acquire a plurality of valid calculation results in history, and the linear information of the historically combined lane lines associated with the valid calculation results is determined as a matching object that historically perceives the linear information of the lane lines;
a threshold value obtaining subunit, configured to determine an average value of the valid calculation results as a matching threshold value;
and the fourth determination subunit is used for determining that the linear information of the combined lane line corresponding to the calculation result is a matching object of the linear information of the perceived lane line when the calculation result is smaller than the matching threshold value.
Optionally, the object determining subunit includes:
And a fifth determining subunit, configured to determine, when the calculation result is the minimum value in all calculation results, that the line shape information of the combined lane line corresponding to the minimum value is a matching object of the line shape information of the perceived lane line.
Optionally, the matching unit further includes:
and a sixth determining subunit, configured to determine that the calculation result is a valid calculation result when the calculation result meets a preset matching condition.
Optionally, the apparatus further includes:
the yaw computing unit is used for performing yaw computing on the linear information of the perceived lane line and the linear information of the combined lane line after determining that the linear information of the combined lane line is a matched object of the linear information of the perceived lane line, so as to obtain a yaw value;
and a correction unit configured to correct the geographical position information based on the yaw value.
Optionally, the linear information of the map lane line and the linear information of the perceived lane line are placed in the same coordinate system.
According to a third aspect of the disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the lane recognition method according to any one of the first aspects.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising: one or more processors; storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the lane recognition method according to any one of the first aspects.
Compared with the prior art, the scheme of the embodiment of the disclosure has at least the following beneficial effects:
the method comprises the steps of obtaining the linear information of a combined lane line from the linear information of a map lane line, and determining a matching object of the linear information of a perceived lane line by matching the linear information of the combined lane line with the linear information of the perceived lane line. The attribute information and the type information of the sensing lane line do not need to be detected, information loss caused by detection failure is avoided, and the matching effectiveness and accuracy are improved.
Drawings
FIG. 1 illustrates a flow chart of a lane identification method according to an embodiment of the present disclosure;
Fig. 2 illustrates line shape information of a lane line in a lane recognition method according to an embodiment of the present disclosure;
fig. 3 shows a block diagram of a unit of a lane recognition apparatus according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit its scope.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
Alternative embodiments of the present disclosure are described in detail below with reference to the drawings.
Some embodiments provided for in this disclosure are embodiments of a lane recognition method.
Example 1
Embodiments of the present disclosure are described in detail below in conjunction with fig. 1.
Step S101, linear information, quantity and geographic position information of the sensing lane lines are acquired.
In a lane, the vehicle acquires a lane image through a camera arranged at the front of the vehicle, and the lane image contains images of lane lines. A perceived lane line can be understood as a lane line recognized from the lane image. In the field of artificial intelligence, the line shape information of a perceived lane line is used to characterize the lane-line image within the lane image. Embodiments of the disclosure determine the exact location of the vehicle in the lane through the line shape information of the perceived lane lines associated with the vehicle.
Due to limitations of the lane image, the number of perceived lane lines that can be identified is not necessarily equal to the number of lane lines in the scene. For example, as shown in fig. 2, the scene contains the A1, B1, C1, and D1 lines, and the vehicle travels in the lane between the B1 line and the C1 line. After visual perception of the lane image, the acquired perceived lane lines include only the B1 line and the C1 line on the two sides of the lane in which the vehicle travels; the line shape information of the perceived lane lines is therefore the line shape information of the B1 line and of the C1 line, and the number of perceived lane lines is 2.
In general, the line shape information of a lane line is represented by a third-order parabola, that is:
y = Ax³ + Bx² + Cx + D;
where A is the curvature derivative;
B is the curvature;
C is the yaw angle;
D is the offset;
x and y are the coordinates of a point on the line.
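For illustration only, the third-order parabola model can be evaluated as a simple function; the coefficient values below are placeholders, not values from the disclosure:

```python
def lane_y(x: float, A: float, B: float, C: float, D: float) -> float:
    """Ordinate of the third-order lane-line model y = A*x**3 + B*x**2 + C*x + D."""
    return A * x**3 + B * x**2 + C * x + D

# A straight lane line (A = B = C = 0) keeps a constant lateral offset D.
offset = lane_y(10.0, 0.0, 0.0, 0.0, 1.75)  # 1.75
```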
Geographic position information is typically obtained from a satellite positioning system (for example, GPS, BeiDou, or GLONASS) or a mobile communication system.
And step S102, acquiring the linear information of the map lane line based on the geographic position information.
The map information database stores the correspondence between geographic position information and the original line shape information of the map lane lines, so the line shape information of the map lane lines can be obtained from the database through the geographic position information. The line shape information of a map lane line is represented in a geodetic coordinate system, while the line shape information of a perceived lane line acquired by the vehicle is represented in a vehicle body coordinate system (for example, with the vehicle body as the origin). To facilitate matching, the line shape information of the map lane lines and of the perceived lane lines may be represented in the same type of coordinate system: the geodetic coordinate system, the vehicle body coordinate system, or another coordinate system.
In order to facilitate matching of the line shape information of the perceived lane line and the line shape information of the combined lane line, optionally, the line shape information of the map lane line and the line shape information of the perceived lane line are placed in the same coordinate system, as shown in fig. 2.
And step S103, obtaining the linear information of each group of combined lane lines from the linear information of the map lane lines.
Each group of combined lane lines is a combination of several map lane lines, and the number of lines in each group is the same as the number of perceived lane lines.
For example, as shown in fig. 2, the map lane lines include 4 lines: the A2, B2, C2, and D2 lines. If the perceived lane lines include 2 lines, the B1 and C1 lines, then 6 groups of combined lane lines, each consisting of 2 map lane lines, can be obtained from the map lane lines: (A2, B2), (A2, C2), (A2, D2), (B2, C2), (B2, D2), and (C2, D2).
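The enumeration of groups in this example can be sketched as follows; the line names are those of fig. 2, and the code is illustrative rather than part of the disclosure:

```python
from itertools import combinations

map_lines = ["A2", "B2", "C2", "D2"]  # 4 map lane lines
num_perceived = 2                     # B1 and C1 are perceived

# Every group of num_perceived map lane lines: C(4, 2) = 6 groups.
groups = list(combinations(map_lines, num_perceived))
# groups[0] is ('A2', 'B2'); groups[-1] is ('C2', 'D2')
```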
It is an object of embodiments of the present disclosure to obtain, from among the line shape information of multiple sets of combined lane lines, line shape information of a set of combined lane lines that matches the line shape information of the perceived lane line. It is understood that the lane lines along which the vehicle is traveling match the set of combined lane lines.
Step S104, the line-shaped information of the sensing lane lines is matched with the line-shaped information of each group of combined lane lines, and the matching object of the line-shaped information of the sensing lane lines is determined.
According to the embodiment of the disclosure, the linear information of the combined lane line is obtained from the linear information of the map lane line, and the matching object of the linear information of the perceived lane line is determined by matching the linear information of the combined lane line with the linear information of the perceived lane line. The attribute information and the type information of the sensing lane line do not need to be detected, information loss caused by detection failure is avoided, and the matching effectiveness and accuracy are improved.
Example two
Since this embodiment is a further optimization of the first embodiment, explanations of the same methods and of terms with the same meaning are identical to those in the first embodiment and are not repeated here.
And S111, obtaining the linear information, the number and the geographic position information of the perceived lane lines.
And step S112, acquiring the linear information of the map lane line based on the geographic position information.
The linear information of the map lane lines and the linear information of the sensing lane lines are both information represented by the same type of coordinate system.
Optionally, the linear information of the map lane line and the linear information of the perceived lane line are placed in the same coordinate system.
Step S113, obtaining the line shape information of each group of combined lane lines from the line shape information of the map lane lines.
Each group of combined lane lines comprises a plurality of combinations of map lane lines, and the number of each group of combined lane lines is the same as the number of the sensing lane lines;
Step S114, obtaining the mutually related sensing sampling point information and the combined sampling point information of each group of combined lane lines.
The sensing sampling point information is sampling point information in the linear information of the sensing lane lines, and the combined sampling point information is sampling point information in the linear information of each group of combined lane lines.
The perceived sampling point information and the combined sampling point information of each group of combined lane lines are associated with each other. This can be understood as follows: if n pieces of combined sampling point information are acquired on the line shape information of a group of combined lane lines, then n pieces of associated perceived sampling point information, occurring in pairs with the combined sampling point information, exist on the line shape information of the perceived lane lines. Likewise, if n pieces of perceived sampling point information are acquired on the line shape information of the perceived lane lines, then n pieces of associated combined sampling point information, occurring in pairs with the perceived sampling point information, exist in the line shape information of each group of combined lane lines.
For example, as shown in fig. 2, the map lane lines include 4 lines (A2, B2, C2, D2) and the perceived lane lines include 2 lines (B1, C1), giving the 6 groups of combined lane lines listed above. If the combined sampling point information on the B2 line of the group (B2, C2) is the point (x1, y1), then the associated perceived sampling point information occurring in a pair with (x1, y1) on the perceived B1 line is the point (x1, y2); the two sampling points share the same x-axis coordinate. In this way, the associated perceived sampling point information and combined sampling point information of each group of combined lane lines can be acquired.
On any group of combined lane lines, the more combined sampling point information is acquired, the more perceived sampling point information is associated with it, and the higher the matching accuracy.
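A minimal sketch of acquiring associated sampling points, assuming both lines are cubics sampled at shared abscissas; all coefficient values and the sampling range are illustrative:

```python
def cubic(x, coeffs):
    """Evaluate y = A*x**3 + B*x**2 + C*x + D at abscissa x."""
    A, B, C, D = coeffs
    return A * x**3 + B * x**2 + C * x + D

combined_coeffs = (0.0, 0.0, 0.01, 1.7)   # (A1, B1, C1, D1), illustrative
perceived_coeffs = (0.0, 0.0, 0.02, 1.8)  # (A2, B2, C2, D2), illustrative

xs = [float(x) for x in range(0, 50, 5)]  # n = 10 shared abscissas
pairs = [(cubic(x, combined_coeffs), cubic(x, perceived_coeffs)) for x in xs]
# pairs[0] is the associated pair at x = 0: (1.7, 1.8)
```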
Because the alternative implementations of step S114 are similar, the present disclosure details only one specific embodiment. In it, acquiring the associated perceived sampling point information and combined sampling point information includes the following steps:
Step S114a-1, determining that the abscissa information of the combined sampling point information is the same as the abscissa information of the perceived sampling point information.
For example, as shown in fig. 2, the abscissa information of the combined sampling point information and the abscissa information of the perceived sampling point information are both x1.
Step S114a-2, determining ordinate information of the combined sampling point information based on the abscissa information of the combined sampling point information and the line shape information of the combined lane line.
For example, as shown in fig. 2 and continuing the example above, the line shape information of the combined lane line is expressed as:
y1 = A1x1³ + B1x1² + C1x1 + D1;
where A1 is the curvature derivative in the line shape information of the combined lane line;
B1 is the curvature in the line shape information of the combined lane line;
C1 is the yaw angle in the line shape information of the combined lane line;
D1 is the offset in the line shape information of the combined lane line;
x1 and y1 are the coordinates of a point on the combined lane line.
Substituting the abscissa x1 of the combined sampling point information into the line shape formula of the combined lane line yields the ordinate y1 of the combined sampling point information.
Step S114a-3, determining the ordinate information of the sensing sampling point information based on the abscissa information of the sensing sampling point information and the line information of the sensing lane line.
For example, continuing the above example as shown in fig. 2, the line shape information of the perceived lane line is expressed as:

y2 = A2*x1^3 + B2*x1^2 + C2*x1 + D2;

wherein A2 represents the curvature derivative in the line shape information of the perceived lane line;

B2 represents the curvature in the line shape information of the perceived lane line;

C2 represents the yaw angle in the line shape information of the perceived lane line;

D2 represents the offset in the line shape information of the perceived lane line;

x1 and y2 represent a coordinate point of the line shape information of the perceived lane line, the abscissa x1 being shared with the combined sampling point;

substituting the abscissa information x1 of the perceived sampling point information into the line shape formula of the perceived lane line yields the ordinate information y2 of the perceived sampling point information.
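Steps S114a-1 through S114a-3 amount to evaluating both cubic lane-line models at a shared abscissa; a sketch with hypothetical coefficient values:

```python
def lane_y(x, A, B, C, D):
    """Evaluate the cubic lane-line model y = A*x**3 + B*x**2 + C*x + D."""
    return A * x ** 3 + B * x ** 2 + C * x + D

# Hypothetical coefficients (curvature derivative, curvature, yaw angle, offset).
combined = (0.0, 0.001, 0.02, 1.5)   # a combined lane line
perceived = (0.0, 0.001, 0.02, 1.4)  # the perceived lane line

x1 = 10.0                    # shared abscissa of the paired sampling points
y1 = lane_y(x1, *combined)   # ordinate of the combined sampling point, 1.8
y2 = lane_y(x1, *perceived)  # ordinate of the perceived sampling point, 1.7
```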
Or in another specific embodiment, the acquiring the inter-related sensing sampling point information and the combined sampling point information includes the following steps:
step S114b-1, determining that the ordinate information of the combined sampling point information is the same as the ordinate information of the perceived sampling point information.
Step S114b-2, determining abscissa information of the combined sampling point information based on the ordinate information of the combined sampling point information and the line shape information of the combined lane line.
Step S114b-3, determining the abscissa information of the sensing sampling point information based on the ordinate information of the sensing sampling point information and the line information of the sensing lane line.
Through the above specific embodiments, the perceived sampling point information and the combined sampling point information corresponding to any group of combined lane lines can be obtained.
Step S115, performing a matching calculation based on the perceived sampling point information and the combined sampling point information of each group of combined lane lines, and obtaining a calculation result of each group of combined lane lines.
Specifically, the method comprises the following steps:
Step S115-1, calculating a root mean square error based on the perceived sampling point information and the combined sampling point information of each group of combined lane lines, and obtaining a calculation result for each corresponding group of combined lane lines.
The root mean square error (RMSE, Root Mean Square Error), also known as the standard error, is commonly expressed over a finite number of measurements by the following formula:

RMSE = sqrt( (d1^2 + d2^2 + ... + dn^2) / n ), i = 1, 2, 3, ..., n;

wherein n represents the number of pairs of mutually associated perceived sampling point information and combined sampling point information corresponding to each group of combined lane lines;

di represents the deviation between the i-th pair of mutually associated perceived sampling point information and combined sampling point information of each group of combined lane lines;

RMSE represents the calculation result of each group of combined lane lines.
If the statistical distribution of the root mean square error is a normal distribution, the probability that the random error falls within ±σ is 68%.
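A direct implementation of the RMSE of step S115-1, assuming the deviation di is taken as the ordinate difference of each associated pair, might look like:

```python
import math

def lane_rmse(perceived_ys, combined_ys):
    """RMSE over n associated sampling-point pairs, d_i = y_perceived - y_combined."""
    n = len(perceived_ys)
    assert n == len(combined_ys) and n > 0
    return math.sqrt(sum((p - c) ** 2 for p, c in zip(perceived_ys, combined_ys)) / n)

# Hypothetical ordinates of three associated pairs; every deviation is 0.1 here.
print(lane_rmse([1.7, 2.1, 2.6], [1.8, 2.0, 2.5]))  # ≈ 0.1
```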
And step S116, when the calculation result meets a preset matching condition, determining the linear information of the combined lane line corresponding to the calculation result as a matching object of the linear information of the perceived lane line.
In a specific embodiment, when the calculation result meets a preset matching condition, determining that the line shape information of the combined lane line corresponding to the calculation result is a matching object of the line shape information of the perceived lane line, including the following steps:
In step S116b-1, when the calculation result is the minimum value of all calculation results, determining the line shape information of the combined lane line corresponding to the minimum value as the matching object of the line shape information of the perceived lane line.
That is, the line shape information of the perceived lane line is matched against the line shape information of all combined lane lines to obtain the respective calculation results, and the line shape information of the combined lane line associated with the minimum value among all calculation results is the matching object of the line shape information of the perceived lane line.
For example, the perceived lane lines are the B1 line and the C1 line of the first embodiment, and the combined lane lines are the six groups listed above; the root mean square error calculation results of the corresponding combined lane lines are 0.25, 0.36, 0.29, 0.21, 0.35, and 0.37, respectively. Since 0.21 is the minimum value among all calculation results, the line shape information of the combined B2 line and C2 line associated with 0.21 is the matching object of the line shape information of the perceived B1 line and C1 line.
This implementation needs to acquire the matching calculation results of the line shape information of all combined lane lines and then find the minimum value among them, so that the matching object of the line shape information of the perceived lane line is determined through the minimum value.
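Selecting the minimum over all calculation results (step S116b-1) is a one-line reduction; a sketch using the hypothetical RMSE values of the example above:

```python
# Hypothetical RMSE results keyed by the group of combined lane lines.
results = {
    ("A2", "B2"): 0.25, ("A2", "C2"): 0.36, ("A2", "D2"): 0.29,
    ("B2", "C2"): 0.21, ("B2", "D2"): 0.35, ("C2", "D2"): 0.37,
}
# The group whose calculation result is the minimum is the matching object.
best_group = min(results, key=results.get)
print(best_group, results[best_group])  # ('B2', 'C2') 0.21
```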
In another specific embodiment, when the calculation result meets a preset matching condition, determining that the line shape information of the combined lane line is a matching object of the line shape information of the perceived lane line includes the following steps:
in step S116a-1, a plurality of historically valid calculation results are obtained.
A valid calculation result is a historical calculation result whose associated line shape information of the combined lane line was determined as the matching object of the line shape information of the perceived lane line.

Optionally, a preset number of valid calculation results generated before the current matching calculation are selected.
Step S116a-2, determining the average value of the valid calculation results as the matching threshold.
The matching threshold is the average of the preset number of valid calculation results. The preset number may count only the historical valid calculation results of a ramp, or the historical valid calculation results of any lane; the valid calculation results may be consecutive in time or selected according to a preset rule.
And step S116a-3, when the calculation result is smaller than the matching threshold value, determining that the linear information of the combined lane line corresponding to the calculation result is a matching object of the linear information of the perceived lane line.
In this embodiment, the matching calculation results of the line shape information of all the combined lane lines do not need to be acquired; as long as a calculation result is smaller than the matching threshold, the matching object of the line shape information of the perceived lane line can be determined. This reduces the amount of calculation and improves the matching efficiency.
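The threshold variant (steps S116a-1 to S116a-3) can accept the first group whose result beats the historical average; a sketch with hypothetical values:

```python
def matches(result, valid_history):
    """Accept a group as soon as its result is below the mean of past valid results."""
    threshold = sum(valid_history) / len(valid_history)
    return result < threshold

history = [0.21, 0.19, 0.23]   # hypothetical historical valid calculation results
print(matches(0.18, history))  # True  (threshold is the mean, 0.21)
print(matches(0.25, history))  # False
```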
In order to obtain a new valid calculation result for the next matching calculation, optionally, when the calculation result meets a preset matching condition, the method further includes:
step S117, determining the calculation result as a valid calculation result.
According to the embodiment of the disclosure, matching calculation is performed through the mutually associated sensing sampling point information and the combined sampling point information, and whether the linear information of the combined lane line is a matching object of the linear information of the sensing lane line is determined through a calculation result. The matching algorithm is simplified, and the matching efficiency is improved.
Example III
Since the embodiments of the present disclosure are further optimized on the basis of the first and second embodiments, explanations based on the same methods and the same names are the same as in the first embodiment and are not repeated here.
After embodiments one and two, the method further comprises the steps of:
Step S121, after determining that the line shape information of the combined lane line is a matching object of the line shape information of the perceived lane line, performing yaw calculation on the line shape information of the perceived lane line and the line shape information of the combined lane line, and obtaining a yaw value.
The yaw calculation includes the iterative closest point algorithm (Iterative Closest Point, ICP). In computer vision, ICP is used for the accurate stitching of depth images, which is achieved by iteratively minimizing the distance between corresponding points of the source and target data. Specifically, ICP matches the data according to certain geometric characteristics, takes the matched points as hypothetical corresponding points, solves the motion parameters from this correspondence, transforms the data using the motion parameters, and then determines a new correspondence using the same geometric features.
The yaw value includes a lateral distance Y and a yaw angle.
The lateral distance represents the offset distance, in the horizontal direction, between the line shape information of the perceived lane line and the line shape information of the combined lane line.

The yaw angle represents the offset angle, in the horizontal direction, between the line shape information of the perceived lane line and the line shape information of the combined lane line.
ICP calculation is performed on the line shape information of the perceived lane line and the line shape information of the combined lane line. For example, given a set of perceived sampling point information pi(xi, yi) and a matching set of combined sampling point information qj(xj, yj), where i and j each run over 1, 2, ..., N (N is a positive integer), the Euclidean distance between pi(xi, yi) and qj(xj, yj) is expressed as:

d(pi, qj) = sqrt( (xi - xj)^2 + (yi - yj)^2 );

to find the rotation matrix R and the translation matrix T between the perceived sampling point information pi(xi, yi) and the combined sampling point information qj(xj, yj), with Ni denoting a noise term:

qj = R × pi + T + Ni;

the optimal solution is obtained by least squares, minimizing:

E = Σ || qj - (R × pi + T) ||^2;

R and T are thus the values at which E is smallest. T is a 2 × 1 matrix, and the value in its second row is the lateral distance Y. R is a 2 × 2 matrix, from which the yaw angle is obtained.
Step S122, correcting the geographic position information based on the yaw value.
After the lateral distance Y and the yaw angle between the line shape information of the perceived lane line and the line shape information of the combined lane line are obtained, the error of the line shape information of the map lane line can be calculated from them, so that the geographic position information in the above embodiments can be corrected. This avoids jumps in the positioning result and ensures positioning accuracy and robustness.
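The closed-form least-squares alignment inside one ICP step can be computed with the SVD-based (Kabsch) solution. The sketch below uses hypothetical point data and recovers the yaw angle from R via atan2, one common convention; it is an illustration of the least-squares step, not the patent's exact implementation:

```python
import math
import numpy as np

def rigid_align_2d(p, q):
    """Least-squares R, T such that q ≈ R @ p + T (known point correspondences)."""
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    H = (p - cp).T @ (q - cq)          # 2x2 cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = cq - R @ cp
    return R, T

# Hypothetical pairs: the perceived line rotated by 0.01 rad, shifted 0.5 m laterally.
theta = 0.01
rot = np.array([[math.cos(theta), -math.sin(theta)],
                [math.sin(theta),  math.cos(theta)]])
p = np.array([[float(x), 1.5] for x in range(10)])   # perceived sampling points
q = (rot @ p.T).T + np.array([0.0, 0.5])             # combined sampling points

R, T = rigid_align_2d(p, q)
yaw = math.atan2(R[1, 0], R[0, 0])  # recovered yaw angle, ≈ 0.01
lateral = T[1]                      # recovered lateral distance Y, ≈ 0.5
```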
Corresponding to the embodiments provided above, the present disclosure also provides a lane recognition device. The device embodiment is adapted to the above method embodiment and is configured to implement the method steps described therein; explanations based on the same names have the same meanings and the same technical effects as in the above embodiment and are not repeated here.
Fig. 3 shows an embodiment of a lane recognition device provided by the present disclosure.
As shown in fig. 3, the present disclosure provides a lane recognition apparatus 300 including:
A perceived-information acquisition unit 301 configured to acquire line-shape information, number, and geographic position information of perceived lane lines;
A map information obtaining unit 302, configured to obtain, based on the geographic location information, line shape information of a map lane line, where the line shape information of the map lane line and the line shape information of the perceived lane line are both information represented by the same type of coordinate system;
a combined information obtaining unit 303, configured to obtain line shape information of each group of combined lane lines from line shape information of the map lane lines, where the number of each group of combined lane lines is the same as the number of the perceived lane lines;
And the matching unit 304 is configured to match the line shape information of the perceived lane line with the line shape information of each group of combined lane lines, and determine a matching object of the line shape information of the perceived lane line.
Optionally, the matching unit 304 includes:
The first acquisition subunit is used for acquiring the sensing sampling point information and the combined sampling point information of each group of combined lane lines which are mutually related, wherein the sensing sampling point information is information of sampling points in the linear information of the sensing lane lines, and the combined sampling point information is information of sampling points in the linear information of each group of combined lane lines;
The matching calculation subunit is used for carrying out matching calculation based on the sensing sampling point information and the combination sampling point information of each group of combination lane lines to obtain calculation results of the corresponding groups of combination lane lines;
And the object determining subunit is used for determining that the linear information of the combined lane line corresponding to the calculation result is a matching object of the linear information of the perceived lane line when the calculation result meets a preset matching condition.
Optionally, the first obtaining subunit includes:
A first determining subunit, configured to determine that abscissa information of the combined sampling point information is the same as abscissa information of the perceived sampling point information;
A second determining subunit, configured to determine ordinate information of the combined sampling point information based on abscissa information of the combined sampling point information and line information of the combined lane line;
and the third determination subunit is used for determining the ordinate information of the sensing sampling point information based on the abscissa information of the sensing sampling point information and the linear information of the sensing lane line.
Optionally, the first obtaining subunit includes:
the ordinate determining subunit is used for determining that the ordinate information of the combined sampling point information is the same as the ordinate information of the perception sampling point information;
A first abscissa determining subunit, configured to determine abscissa information of the combined sampling point information based on ordinate information of the combined sampling point information and line information of the combined lane line;
and the second abscissa determining subunit is used for determining the abscissa information of the sensing sampling point information based on the ordinate information of the sensing sampling point information and the linear information of the sensing lane line.
Optionally, the matching calculation subunit includes:
The first result obtaining subunit is used for calculating root mean square error based on the sensing sampling point information and the combined sampling point information of each group of combined lane lines and obtaining a calculation result of each group of corresponding combined lane lines.
Optionally, the object determining subunit includes:
a second acquisition subunit, configured to acquire a plurality of historical valid calculation results, wherein the line shape information of the historical combined lane lines associated with the valid calculation results was determined as the matching object of the line shape information of the historical perceived lane lines;
a threshold value obtaining subunit, configured to determine an average value of the valid calculation results as a matching threshold value;
and the fourth determination subunit is used for determining that the linear information of the combined lane line corresponding to the calculation result is a matching object of the linear information of the perceived lane line when the calculation result is smaller than the matching threshold value.
Optionally, the object determining subunit includes:
And a fifth determining subunit, configured to determine, when the calculation result is the minimum value in all calculation results, that the line shape information of the combined lane line corresponding to the minimum value is a matching object of the line shape information of the perceived lane line.
Optionally, the matching unit 304 further includes:
and a sixth determining subunit, configured to determine that the calculation result is a valid calculation result when the calculation result meets a preset matching condition.
Optionally, the apparatus further includes:
the yaw computing unit is used for performing yaw computing on the linear information of the perceived lane line and the linear information of the combined lane line after determining that the linear information of the combined lane line is a matched object of the linear information of the perceived lane line, so as to obtain a yaw value;
and a correction unit configured to correct the geographical position information based on the yaw value.
Optionally, the linear information of the map lane line and the linear information of the perceived lane line are placed in the same coordinate system.
By obtaining the line shape information of the combined lane lines from the line shape information of the map lane lines and matching it against the line shape information of the perceived lane lines, the matching object of the line shape information of the perceived lane lines is determined. The attribute information and type information of the perceived lane lines do not need to be detected, information loss caused by detection failure is avoided, and the validity and accuracy of the matching are improved.
Embodiments of the present disclosure further provide an electronic device for the lane recognition method, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein,

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the lane recognition method described in the above embodiments.
Embodiments of the present disclosure further provide a computer storage medium for lane recognition, the computer storage medium storing computer-executable instructions that can be executed to perform the lane recognition method described in the above embodiments.
The foregoing description covers only the preferred embodiments of the present disclosure and an explanation of the technical principles employed. Persons skilled in the art will appreciate that the scope of the disclosure is not limited to the specific combinations of features described above, but also covers other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, such as embodiments in which the above features are substituted with (but not limited to) technical features of similar functions disclosed in the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (13)

1. A lane recognition method, comprising:
acquiring linear information, quantity and geographic position information of a sensing lane line;
acquiring linear information of a map lane line based on the geographic position information, wherein the linear information of the map lane line and the linear information of the sensing lane line are both information represented by the same type of coordinate system;
Acquiring the linear information of each group of combined lane lines from the linear information of the map lane lines, wherein the number of each group of combined lane lines is the same as that of the perceived lane lines;
matching the linear information of the perceived lane line with the linear information of each group of combined lane lines, and determining a matching object of the linear information of the perceived lane line;
the matching of the line shape information of the perceived lane line with the line shape information of each group of combined lane lines, and the determination of the matching object of the line shape information of the perceived lane line comprises the following steps:
Acquiring the information of the sensing sampling points and the information of the combined sampling points of each group of combined lane lines which are mutually related, wherein the information of the sensing sampling points is the information of the sampling points in the linear information of the sensing lane lines, and the information of the combined sampling points is the information of the sampling points in the linear information of each group of combined lane lines;
based on the sensing sampling point information and the combination sampling point information of each group of combination lane lines, carrying out matching calculation to obtain calculation results of the corresponding groups of combination lane lines;
When the calculation result meets a preset matching condition, determining that the linear information of the combined lane line corresponding to the calculation result is a matching object of the linear information of the perceived lane line;
the obtaining the inter-related sensing sampling point information and the combined sampling point information comprises the following steps:
Determining that the abscissa information and the ordinate information of the combined sampling point information are the same as the abscissa information and the ordinate information of the sensing sampling point information;
based on the linear information of the combined lane line and the abscissa information or the ordinate information of the combined sampling point information, respectively determining the ordinate information or the abscissa information of the combined sampling point information;
And determining the ordinate information or the abscissa information of the sensing sampling point information based on the linear information of the sensing lane line and the abscissa information or the ordinate information of the sensing sampling point information respectively.
2. The method according to claim 1, wherein the performing a matching calculation based on the perceived sampling point information and the combined sampling point information of each set of combined lane lines, and obtaining a calculation result of each set of combined lane lines, includes:
And calculating root mean square error based on the sensing sampling point information and the combined sampling point information of each group of combined lane lines, and obtaining a calculation result of each group of corresponding combined lane lines.
3. The method according to claim 1, wherein when the calculation result satisfies a preset matching condition, determining that the line shape information of the combined lane line corresponding to the calculation result is a matching object of the line shape information of the perceived lane line includes:
Acquiring a plurality of effective calculation results in history, wherein the linear information of the combined lane lines in history, which is associated with the effective calculation results, is determined as a matching object for sensing the linear information of the lane lines in history;
Determining the average value of the effective calculation results as a matching threshold value;
And when the calculation result is smaller than the matching threshold value, determining that the linear information of the combined lane line corresponding to the calculation result is a matching object of the linear information of the perceived lane line.
4. The method according to claim 1, wherein when the calculation result satisfies a preset matching condition, determining that the line shape information of the combined lane line corresponding to the calculation result is a matching object of the line shape information of the perceived lane line includes:
and when the calculation result is the minimum value in all calculation results, determining the linear information of the combined lane line corresponding to the minimum value as a matching object of the linear information of the perceived lane line.
5. The method according to claim 1, wherein when the calculation result satisfies a preset matching condition, further comprising:
and determining the calculation result as a valid calculation result.
6. The method according to claim 1, wherein the method further comprises:
after determining that the linear information of the combined lane line is a matching object of the linear information of the perceived lane line, performing yaw calculation on the linear information of the perceived lane line and the linear information of the combined lane line to obtain a yaw value;
The geographic position information is corrected based on the yaw value.
7. The method of any of claims 1-6, wherein the linear information of the map lane lines is placed in the same coordinate system as the linear information of the perceived lane lines.
8. A lane recognition device, characterized by comprising:
The sensing information acquisition unit is used for acquiring linear information, quantity and geographic position information of the sensing lane lines;
The map information acquisition unit is used for acquiring the linear information of the map lane line based on the geographic position information, wherein the linear information of the map lane line and the linear information of the perceived lane line are both information represented by the same type of coordinate system;
A combined information obtaining unit, configured to obtain line shape information of each group of combined lane lines from line shape information of the map lane lines, where the number of each group of combined lane lines is the same as the number of the perceived lane lines;
the matching unit is used for matching the linear information of the sensing lane lines with the linear information of each group of combined lane lines and determining a matching object of the linear information of the sensing lane lines;
the matching unit includes:
The first acquisition subunit is used for acquiring the sensing sampling point information and the combined sampling point information of each group of combined lane lines which are mutually related, wherein the sensing sampling point information is information of sampling points in the linear information of the sensing lane lines, and the combined sampling point information is information of sampling points in the linear information of each group of combined lane lines;
The matching calculation subunit is used for carrying out matching calculation based on the sensing sampling point information and the combination sampling point information of each group of combination lane lines to obtain calculation results of the corresponding groups of combination lane lines;
an object determining subunit, configured to determine that, when the calculation result meets a preset matching condition, the line shape information of the combined lane line corresponding to the calculation result is a matching object of the line shape information of the perceived lane line;
The first acquisition subunit includes:
The first determining subunit is used for determining that the abscissa information and the ordinate information of the combined sampling point information are the same as the abscissa information and the ordinate information of the sensing sampling point information;
A second determining subunit, configured to determine, based on the line shape information of the combined lane line and the abscissa information or the ordinate information of the combined sampling point information, the ordinate information or the abscissa information of the combined sampling point information, respectively;
And the third determination subunit is used for respectively determining the ordinate information or the abscissa information of the sensing sampling point information based on the linear information of the sensing lane line and the abscissa information or the ordinate information of the sensing sampling point information.
9. The apparatus of claim 8, wherein the match computation subunit comprises:
The first result obtaining subunit is used for calculating root mean square error based on the sensing sampling point information and the combined sampling point information of each group of combined lane lines and obtaining a calculation result of each group of corresponding combined lane lines.
10. The apparatus of claim 8, wherein the object determination subunit comprises:
a second acquisition subunit, configured to acquire a plurality of historical valid calculation results, wherein the line shape information of the historical combined lane lines associated with the valid calculation results was determined as the matching object of the line shape information of the historical perceived lane lines;
a threshold value obtaining subunit, configured to determine an average value of the valid calculation results as a matching threshold value;
and the fourth determination subunit is used for determining that the linear information of the combined lane line corresponding to the calculation result is a matching object of the linear information of the perceived lane line when the calculation result is smaller than the matching threshold value.
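The threshold rule of claim 10 can be sketched as follows (the historical result values are hypothetical; the claim only specifies the average of historical valid results as the threshold and a strict less-than comparison):

```python
def matching_threshold(historical_valid_results):
    """Average of historical valid calculation results, used as the matching threshold."""
    return sum(historical_valid_results) / len(historical_valid_results)

def is_match(result, threshold):
    """A result matches when it is strictly smaller than the threshold."""
    return result < threshold

threshold = matching_threshold([0.4, 0.6, 0.5])  # hypothetical history
```

Because the threshold is derived from results that previously produced valid matches, it adapts to the error level the system has actually observed rather than relying on a fixed constant.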
11. The apparatus of claim 8, wherein the object determination subunit comprises:
and a fifth determining subunit, configured to determine, when the calculation result is the minimum of all calculation results, that the line shape information of the combined lane line corresponding to the minimum is a matching object of the line shape information of the perceived lane line.
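The minimum-selection rule of claim 11 amounts to picking the group with the smallest calculation result (group names and values below are hypothetical):

```python
def best_match(results):
    """Pick the group whose calculation result is the minimum of all results."""
    return min(results, key=results.get)

picked = best_match({"group_a": 0.42, "group_b": 0.17, "group_c": 0.58})
```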
12. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1 to 7.
13. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1 to 7.
CN202110413035.8A 2021-04-16 2021-04-16 Lane recognition method and device, medium and electronic equipment Active CN112949609B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110413035.8A CN112949609B (en) 2021-04-16 2021-04-16 Lane recognition method and device, medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN112949609A CN112949609A (en) 2021-06-11
CN112949609B true CN112949609B (en) 2024-05-28

Family

ID=76232863

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110413035.8A Active CN112949609B (en) 2021-04-16 2021-04-16 Lane recognition method and device, medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN112949609B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114689044A (en) * 2022-03-28 2022-07-01 Chongqing Changan Automobile Co., Ltd. Fusion positioning system and method for dealing with failure scene of global navigation satellite system
CN115147789B (en) * 2022-06-16 2023-04-18 HoloMatic Technology (Beijing) Co., Ltd. Method, device, equipment and computer readable medium for detecting split and combined road information

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105444770A (en) * 2015-12-18 2016-03-30 Shanghai Jiao Tong University Smartphone-based lane-level map generation and positioning system and method
CN107643086A (en) * 2016-07-22 2018-01-30 Beijing NavInfo Technology Co., Ltd. Vehicle positioning method, apparatus and system
CN110567480A (en) * 2019-09-12 2019-12-13 Beijing Baidu Netcom Science and Technology Co., Ltd. Optimization method, device and equipment for vehicle positioning and storage medium
CN111380538A (en) * 2018-12-28 2020-07-07 Shenyang Meihang Technology Co., Ltd. Vehicle positioning method, navigation method and related device
CN111932887A (en) * 2020-08-17 2020-11-13 Wuhan NavInfo Technology Co., Ltd. Method and equipment for generating lane-level track data
CN112166059A (en) * 2018-05-25 2021-01-01 SK Telecom Co., Ltd. Position estimation device for vehicle, position estimation method for vehicle, and computer-readable recording medium storing computer program programmed to execute the method
CN112560680A (en) * 2020-12-16 2021-03-26 Beijing Baidu Netcom Science and Technology Co., Ltd. Lane line processing method and device, electronic device and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10896334B2 (en) * 2018-11-28 2021-01-19 Here Global B.V. Method and system of a machine learning model for detection of physical dividers

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Schlichting A et al., "Map matching for vehicle localization based on serial LiDAR sensors", 2019 IEEE Intelligent Transportation Systems Conference (ITSC), 2019, pp. 1257-1262 *
Zheng Shichen et al., "A road-network matching method for driving trajectories based on particle filtering" (in Chinese), Journal of Geo-information Science, Vol. 22, No. 11, 2020, pp. 2109-2117 *
Ou Kejun, "Lane information detection method based on vision and map" (in Chinese), China Master's Theses Full-text Database (Engineering Science and Technology II), No. 7, 2019, C035-154 *

Also Published As

Publication number Publication date
CN112949609A (en) 2021-06-11

Similar Documents

Publication Publication Date Title
EP3637371B1 (en) Map data correcting method and device
CN112949609B (en) Lane recognition method and device, medium and electronic equipment
EP3517997A1 (en) Method and system for detecting obstacles by autonomous vehicles in real-time
CN112017251B (en) Calibration method and device, road side equipment and computer readable storage medium
US10909395B2 (en) Object detection apparatus
CN111830953A (en) Vehicle self-positioning method, device and system
WO2018180338A1 (en) Information processing device, server device, control method, program, and storage medium
EP3690814B1 (en) Lane recognition device
US12055413B2 (en) Apparatus and method for updating detailed map
EP4105829A1 (en) Lane line determination method and system, vehicle, and storage medium
JP2020026985A (en) Vehicle position estimation device and program
CN114332225A (en) Lane line matching positioning method, electronic device and storage medium
CN112466147B (en) Multi-sensor-based library position detection method and related device
KR102100047B1 (en) Method for position recognition of vehicle using lane-end-point detection algorithm and method for evaluating performance of the same
CN112633035A (en) Driverless vehicle-based lane line coordinate true value acquisition method and device
CN114067556B (en) Environment sensing method, device, server and readable storage medium
CN113544034B (en) Apparatus and method for determining correction information of a vehicle sensor
CN114067555A (en) Registration method and device for data of multiple base stations, server and readable storage medium
WO2021063756A1 (en) Improved trajectory estimation based on ground truth
CN116256781A (en) Combined characteristic multi-source satellite information track association method and system
US11477371B2 (en) Partial image generating device, storage medium storing computer program for partial image generation and partial image generating method
CN115773747A (en) High-precision map generation method, device, equipment and storage medium
CN115661589A (en) Method and device for evaluating fusion perception algorithm, storage medium and vehicle
CN115507752A (en) Monocular vision distance measurement method and system based on parallel environment elements
EP4296711A1 (en) Automatic extrinsic calibration and calibration validation of different sensor modalities, e.g camera, radar and lidar sensors

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant