CN112949609A - Lane recognition method, device, medium and electronic equipment

Info

Publication number: CN112949609A
Application number: CN202110413035.8A
Authority: CN (China)
Prior art keywords: information, combined, lane line, linear, sampling point
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventor: 夏靖
Current Assignee: Beijing CHJ Automotive Information Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Beijing CHJ Automotive Information Technology Co Ltd
Application filed by Beijing CHJ Automotive Information Technology Co Ltd
Priority to CN202110413035.8A
Publication of CN112949609A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures

Abstract

The disclosure provides a lane recognition method, a lane recognition device, a lane recognition medium, and an electronic apparatus. The method acquires the linear information of combined lane lines from the linear information of map lane lines, matches the linear information of the combined lane lines with the linear information of the perceived lane lines, and determines the matching object of the linear information of the perceived lane lines. The attribute information and type information of the perceived lane lines do not need to be detected, which avoids information loss caused by detection failure and improves the validity and accuracy of matching.

Description

Lane recognition method, device, medium and electronic equipment
Technical Field
The present disclosure relates to the field of artificial intelligence, and in particular, to a lane recognition method, apparatus, medium, and electronic device.
Background
An autonomous vehicle relies on the cooperation of artificial intelligence, visual computing, sensors, monitoring devices, and a global positioning system, so that the vehicle's controller can operate the motor vehicle automatically and safely without any active human operation.
Visual perception refers to the object information formed in the mind after the human visual organs distinguish an object. In the field of artificial intelligence, visual perception is the object information obtained after analyzing acquired images.
When an autonomous vehicle travels at a highway ramp junction, the visual perception output of the automated assisted-navigation driving function needs to detect the attribute information and type information of the perceived lane lines, with each lane line represented by a third-order parabola. A series of calculations on the third-order parabola then yields matching pairs of perceived lane lines and map lane lines, from which the corrected lateral distance and yaw angle are obtained and the navigation positioning value is corrected. However, in complex scenes this method often fails to correctly detect the attribute information and type information of the lane lines, and therefore cannot obtain the matching relationship between the perceived lane lines and the map lane lines, directly causing the vehicle positioning result to jump. The high false-detection rate at ramps affects the safety of the autonomous vehicle.
Disclosure of Invention
An object of the present disclosure is to provide a lane recognition method, apparatus, medium, and electronic device, which can solve at least one of the above-mentioned technical problems. The specific scheme is as follows:
according to a specific embodiment of the present disclosure, in a first aspect, the present disclosure provides a lane recognition method, including:
acquiring linear information, quantity and geographical position information of a perception lane line;
acquiring the linear information of a map lane line based on the geographic position information, wherein the linear information of the map lane line and the linear information of the perception lane line are represented by the same type of coordinate system;
acquiring the linear information of each group of combined lane lines from the linear information of the map lane lines, wherein the number of the combined lane lines in each group is the same as that of the perception lane lines;
and matching the linear information of the perception lane line with the linear information of each group of combined lane lines to determine a matching object of the linear information of the perception lane line.
Optionally, the matching the linear information of the perception lane line with the linear information of each group of combined lane lines to determine a matching object of the linear information of the perception lane line includes:
acquiring mutually-correlated sensing sampling point information and combined sampling point information of each group of combined lane lines, wherein the sensing sampling point information is information of a sampling point in linear information of the sensing lane lines, and the combined sampling point information is information of a sampling point in linear information of each group of combined lane lines;
performing matching calculation based on the sensing sampling point information and the combined sampling point information of each group of combined lane lines to obtain a calculation result corresponding to each group of combined lane lines;
and when the calculation result meets a preset matching condition, determining the linear information of the combined lane line corresponding to the calculation result as a matching object of the linear information of the perception lane line.
Optionally, the obtaining of the correlated sensing sampling point information and combined sampling point information includes:
determining that the abscissa information of the combined sampling point information is the same as the abscissa information of the sensing sampling point information;
determining the ordinate information of the combined sampling point information based on the abscissa information of the combined sampling point information and the linear information of the combined lane line;
and determining the ordinate information of the sensing sampling point information based on the abscissa information of the sensing sampling point information and the linear information of the sensing lane line.
Optionally, the obtaining of the correlated sensing sampling point information and combined sampling point information includes:
determining that the ordinate information of the combined sampling point information is the same as the ordinate information of the sensing sampling point information;
determining the abscissa information of the combined sampling point information based on the ordinate information of the combined sampling point information and the linear information of the combined lane line;
and determining the abscissa information of the sensing sampling point information based on the ordinate information of the sensing sampling point information and the linear information of the sensing lane line.
Optionally, the performing matching calculation based on the sensing sampling point information and the combined sampling point information of each group of combined lane lines to obtain the calculation result corresponding to each group of combined lane lines includes:
and calculating the root mean square error based on the information of the sensing sampling points and the information of the combined sampling points of each group of combined lane lines, and acquiring the calculation result corresponding to each group of combined lane lines.
Optionally, when the calculation result meets a preset matching condition, determining that the linear information of the combined lane line corresponding to the calculation result is a matching object of the linear information of the perceived lane line, including:
obtaining a plurality of historical effective calculation results, wherein the linear information of the historical combined lane line associated with the effective calculation results is determined as a matching object of the linear information of the historical perception lane line;
determining an average value of the valid calculation results as a matching threshold;
and when the calculation result is smaller than the matching threshold, determining that the linear information of the combined lane line corresponding to the calculation result is a matching object of the linear information of the perception lane line.
Optionally, when the calculation result meets a preset matching condition, determining that the linear information of the combined lane line corresponding to the calculation result is a matching object of the linear information of the perceived lane line, including:
and when the calculation result is the minimum value in all the calculation results, determining the linear information of the combined lane line corresponding to the minimum value as a matching object of the linear information of the perception lane line.
Optionally, when the calculation result meets a preset matching condition, the method further includes:
and determining the calculation result as a valid calculation result.
Optionally, the method further includes:
when the linear information of the combined lane line is determined to be a matching object of the linear information of the perception lane line, performing yaw calculation on the linear information of the perception lane line and the linear information of the combined lane line to obtain a yaw value;
correcting the geographic position information based on the yaw value.
Optionally, the line-shaped information of the map lane line and the line-shaped information of the perception lane line are placed in the same coordinate system.
According to a second aspect thereof, the present disclosure provides a lane recognition apparatus including:
the perception information acquisition unit is used for acquiring linear information, the number and the geographical position information of the perception lane lines;
the map information acquisition unit is used for acquiring the linear information of a map lane line based on the geographic position information, and the linear information of the map lane line and the linear information of the perception lane line are represented by the same type of coordinate system;
the combined information acquisition unit is used for acquiring the linear information of each group of combined lane lines from the linear information of the map lane lines, and the number of the combined lane lines in each group is the same as that of the perception lane lines;
and the matching unit is used for matching the linear information of the perception lane line with the linear information of each group of combined lane lines and determining a matching object of the linear information of the perception lane line.
Optionally, the matching unit includes:
the first acquisition subunit is used for acquiring mutually-correlated sensing sampling point information and combined sampling point information of each group of combined lane lines, wherein the sensing sampling point information is information of sampling points in the linear information of the sensing lane lines, and the combined sampling point information is information of sampling points in the linear information of each group of combined lane lines;
the matching calculation subunit is used for performing matching calculation based on the sensing sampling point information and the combined sampling point information of each group of combined lane lines to obtain a calculation result corresponding to each group of combined lane lines;
and the object determining subunit is used for determining the linear information of the combined lane line corresponding to the calculation result as a matching object of the linear information of the perception lane line when the calculation result meets a preset matching condition.
Optionally, the first obtaining subunit includes:
the first determining subunit is used for determining that the abscissa information of the combined sampling point information is the same as the abscissa information of the perceived sampling point information;
the second determining subunit is used for determining the ordinate information of the combined sampling point information based on the abscissa information of the combined sampling point information and the linear information of the combined lane line;
and the third determining subunit is used for determining the ordinate information of the sensing sampling point information based on the abscissa information of the sensing sampling point information and the linear information of the sensing lane line.
Optionally, the first obtaining subunit includes:
the vertical coordinate determining subunit is used for determining that the vertical coordinate information of the combined sampling point information is the same as the vertical coordinate information of the sensing sampling point information;
the first abscissa determining subunit is used for determining the abscissa information of the combined sampling point information based on the ordinate information of the combined sampling point information and the linear information of the combined lane line;
and the second abscissa determining subunit is used for determining the abscissa information of the sensing sampling point information based on the ordinate information of the sensing sampling point information and the linear information of the sensing lane line.
Optionally, the matching calculation subunit includes:
and the first result acquisition subunit is used for calculating the root mean square error based on the information of the sensing sampling points and the information of the combined sampling points of each group of combined lane lines, and acquiring the calculation result corresponding to each group of combined lane lines.
Optionally, the object determining subunit includes:
a second acquisition subunit, configured to acquire a plurality of historical valid calculation results, the linear information of the historical combined lane line associated with the valid calculation results having been determined as a matching object of the linear information of the historical perceived lane line;
a threshold obtaining subunit, configured to determine that an average value of the effective calculation results is a matching threshold;
and the fourth determining subunit is configured to determine, when the calculation result is smaller than the matching threshold, that the linear information of the combined lane line corresponding to the calculation result is a matching object of the linear information of the perceived lane line.
Optionally, the object determining subunit includes:
and the fifth determining subunit is configured to determine, when the calculation result is the minimum value among all the calculation results, that the linear information of the combined lane line corresponding to the minimum value is a matching object of the linear information of the perceived lane line.
Optionally, the matching unit further includes:
and the sixth determining subunit is configured to determine that the calculation result is an effective calculation result when the calculation result meets a preset matching condition.
Optionally, the apparatus further comprises:
the yaw calculation unit is used for performing yaw calculation on the linear information of the perception lane line and the linear information of the combined lane line to acquire a yaw value after determining that the linear information of the combined lane line is a matching object of the linear information of the perception lane line;
a correction unit for correcting the geographical position information based on the yaw value.
Optionally, the line-shaped information of the map lane line and the line-shaped information of the perception lane line are placed in the same coordinate system.
According to a third aspect, the present disclosure provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the lane recognition method according to any one of the first aspect.
According to a fourth aspect thereof, the present disclosure provides an electronic device, comprising: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the lane recognition method according to any one of the first aspect.
Compared with the prior art, the scheme of the embodiment of the disclosure at least has the following beneficial effects:
the method comprises the steps of obtaining linear information of a combined lane line from the linear information of the map lane line, matching the linear information of the combined lane line with the linear information of a perception lane line, and determining a matching object of the linear information of the perception lane line. Attribute information and type information of the sensing lane line do not need to be detected, information loss caused by detection failure is avoided, and effectiveness and accuracy of matching are improved.
Drawings
Fig. 1 shows a flow chart of a lane recognition method according to an embodiment of the present disclosure;
fig. 2 illustrates line shape information of a lane line in a lane recognition method according to an embodiment of the present disclosure;
fig. 3 shows a block diagram of the elements of a lane recognition device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
Alternative embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
The embodiments provided below by the present disclosure are embodiments of a lane recognition method.
Example one
The embodiments of the present disclosure are described in detail below with reference to fig. 1.
And step S101, acquiring linear information, quantity and geographical position information of the sensing lane line.
While the vehicle is in a lane, a camera mounted at the front of the vehicle captures lane images, and each lane image includes images of lane lines. A perceived lane line is a lane line image that can be recognized from the lane image. In the field of artificial intelligence, the linear information of a perceived lane line is used to represent a lane line image in the lane image. The disclosed embodiments determine the exact position of the vehicle on the road through the linear information of the perceived lane lines associated with the vehicle.
Due to the limitations of the lane image, the number of perceived lane lines that can be recognized may not equal the number of lane line images in the lane image. For example, as shown in fig. 2, the lane image includes the A1 line, B1 line, C1 line and D1 line, and the lane between the B1 line and the C1 line is the lane in which the vehicle is traveling. After visual perception of the lane image, the obtained perceived lane lines include only the B1 line and the C1 line on the two sides of the lane in which the vehicle is traveling. The linear information of the perceived lane lines is the linear information of the B1 line and the linear information of the C1 line; the number of perceived lane lines is therefore 2.
Generally, the linear information of a lane line is represented by a third-order parabola, namely:

y = Ax³ + Bx² + Cx + D;

where A is the curvature derivative;

B is the curvature;

C is the yaw angle;

D is the offset;

x and y are the coordinates of a point of the linear information.
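For illustration, a point on such a lane line can be computed directly from the four coefficients. The following minimal Python sketch is not part of the patent, and the coefficient values are hypothetical:

    def lane_line_y(x, A, B, C, D):
        """Evaluate the third-order parabola y = A*x^3 + B*x^2 + C*x + D."""
        return A * x**3 + B * x**2 + C * x + D

    # Hypothetical coefficients for a gently curving lane line.
    y = lane_line_y(10.0, A=1e-6, B=2e-4, C=0.01, D=1.8)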
The geographical location information is usually obtained by a satellite positioning system (e.g., the GPS, BeiDou, or GLONASS system) or a mobile communication system.
And step S102, acquiring the linear information of the map lane line based on the geographic position information.
The map information database stores the corresponding relation between the geographic position information and the original linear information of the map lane line. The linear information of the map lane line can be obtained from the map information database through the geographic position information. The line shape information of the map lane line is information represented by a terrestrial coordinate system, and the line shape information of the perception lane line acquired by the vehicle is information represented by a vehicle body coordinate system (for example, the vehicle body is the origin of the coordinate system). For convenience of matching, the line-shaped information of the map lane line and the line-shaped information of the sensing lane line may both adopt information represented by a coordinate system of the same type, and further, the line-shaped information of the map lane line and the line-shaped information of the sensing lane line are both information represented by a global coordinate system, or information represented by a vehicle body coordinate system, or information represented by other coordinate systems, which is not limited in the embodiment of the disclosure.
In order to facilitate matching of the linear information of the perception lane line and the linear information of the combined lane line, optionally, the linear information of the map lane line and the linear information of the perception lane line are placed in the same coordinate system, as shown in fig. 2.
And step S103, acquiring the linear information of each group of combined lane lines from the linear information of the map lane lines.
Each group of combined lane lines comprises a plurality of combinations of the map lane lines, and the number of the combined lane lines in each group is the same as that of the sensing lane lines.
For example, as shown in fig. 2, the map lane lines include 4 lines: the A2 line, B2 line, C2 line and D2 line. If the perceived lane lines include 2 lines, the B1 line and the C1 line, then 6 groups of combined lane lines, each consisting of 2 map lane lines, can be obtained from the map lane lines: A2 and B2, A2 and C2, A2 and D2, B2 and C2, B2 and D2, C2 and D2.
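As an illustration only (the patent does not prescribe an enumeration routine), the six groups above can be produced with a standard combination function:

    from itertools import combinations

    map_lines = ["A2", "B2", "C2", "D2"]  # the 4 map lane lines
    num_perceived = 2                     # number of perceived lane lines

    # Each group keeps the original left-to-right order of the map lane lines.
    groups = list(combinations(map_lines, num_perceived))
    # [('A2', 'B2'), ('A2', 'C2'), ('A2', 'D2'),
    #  ('B2', 'C2'), ('B2', 'D2'), ('C2', 'D2')]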
It is an object of the embodiments of the present disclosure to obtain, from the linear information of the groups of combined lane lines, the linear information of the one group that matches the linear information of the perceived lane lines. It can be understood that the lane lines on which the vehicle travels match that group of combined lane lines.
And step S104, matching the linear information of the perception lane line with the linear information of each group of combined lane lines, and determining a matching object of the linear information of the perception lane line.
The embodiment of the disclosure determines the matching object of the linear information of the perception lane line by matching the linear information of the combination lane line with the linear information of the perception lane line according to the linear information of the combination lane line acquired from the linear information of the map lane line. Attribute information and type information of the sensing lane line do not need to be detected, information loss caused by detection failure is avoided, and effectiveness and accuracy of matching are improved.
Example two
Since the embodiment of the present disclosure is further optimized on the basis of the first embodiment, explanations of the same methods and terms are the same as in the first embodiment and are not repeated here.
And step S111, acquiring linear information, quantity and geographical position information of the sensing lane line.
And step S112, acquiring the linear information of the map lane line based on the geographic position information.
The linear information of the map lane line and the linear information of the perception lane line are represented by the same type of coordinate system.
Optionally, the line-shaped information of the map lane line and the line-shaped information of the perception lane line are placed in the same coordinate system.
In step S113, the alignment information of each group of combination lane lines is obtained from the alignment information of the map lane lines.
Each group of combined lane lines comprises a plurality of combinations of the map lane lines, and the number of the combined lane lines in each group is the same as that of the sensing lane lines;
and step S114, obtaining the mutually correlated sensing sampling point information and the combined sampling point information of each group of combined lane lines.
The sensing sampling point information is sampling point information in the linear information of the sensing lane lines, and the combined sampling point information is sampling point information in the linear information of each group of combined lane lines.
The correlated perceived sampling point information and combined sampling point information of each group of combined lane lines can be understood as follows: if n pieces of combined sampling point information are obtained from the linear information of a group of combined lane lines, then n pieces of mutually correlated perceived sampling point information that appear in pairs with the combined sampling point information exist in the linear information of the perceived lane lines. Similarly, if n pieces of perceived sampling point information are obtained from the linear information of the perceived lane lines, then n pieces of mutually correlated combined sampling point information that appear in pairs with the perceived sampling point information exist in the linear information of each group of combined lane lines.
For example, as shown in fig. 2, the map lane lines include 4 lines: the A2 line, B2 line, C2 line and D2 line. If the perceived lane lines include 2 lines, the B1 line and the C1 line, 6 groups of combined lane lines, each consisting of 2 map lane lines, can be obtained: A2 and B2, A2 and C2, A2 and D2, B2 and C2, B2 and D2, C2 and D2. Consider the group of combined lane lines consisting of the B2 line and the C2 line: the combined sampling point information of its B2 line is the (x1, y1) point, and on the perceived lane line B1, the correlated perceived sampling point information paired with the (x1, y1) point is the (x1, y2) point; the two sampling points have the same x-axis coordinate. In this way, the mutually correlated perceived sampling point information and combined sampling point information corresponding to each group of combined lane lines can be acquired.
On any group of combined lane lines, the more combined sampling point information is acquired, the more mutually correlated perceived sampling point information there is, and the higher the matching accuracy.
Since the embodiments of step S114 are similar, the present disclosure describes only one specific embodiment in detail. Obtaining correlated perceived sampling point information and combined sampling point information includes the following steps:
step S114a-1, determining that the abscissa information of the combined sampling point information is the same as the abscissa information of the sensed sampling point information.
For example, as shown in fig. 2, the abscissa information of the combined sampling point information and the abscissa information of the perceived sampling point information are both x1.
Step S114a-2, determining ordinate information of the combined sample point information based on the abscissa information of the combined sample point information and the alignment information of the combined lane line.
For example, as shown in fig. 2 and continuing the above example, the linear information of the combined lane line is expressed as:

y1 = A1·x1³ + B1·x1² + C1·x1 + D1;

where A1 is the curvature derivative in the linear information of the combined lane line;

B1 is the curvature in the linear information of the combined lane line;

C1 is the yaw angle in the linear information of the combined lane line;

D1 is the offset in the linear information of the combined lane line;

x1 and y1 are the coordinates of a point of the linear information of the combined lane line.

Substituting the abscissa information x1 of the combined sampling point information into the linear information formula of the combined lane line yields the ordinate information y1 of the combined sampling point information.
Step S114a-3, determining ordinate information of the sensing sampling point information based on the abscissa information of the sensing sampling point information and the linear information of the sensing lane line.
For example, as shown in fig. 2 and continuing the above example, the linear information of the perceived lane line is expressed as:

y2 = A2·x2³ + B2·x2² + C2·x2 + D2;

where A2 is the curvature derivative in the linear information of the perceived lane line;

B2 is the curvature in the linear information of the perceived lane line;

C2 is the yaw angle in the linear information of the perceived lane line;

D2 is the offset in the linear information of the perceived lane line;

x2 and y2 are the coordinates of a point of the linear information of the perceived lane line.

Substituting the abscissa information x2 of the perceived sampling point information (equal to x1 per step S114a-1) into the linear information formula of the perceived lane line yields the ordinate information y2 of the perceived sampling point information.
Or, in another specific embodiment, the obtaining of the correlated perceptual sampling point information and combined sampling point information includes the following steps:
and step S114b-1, determining that the ordinate information of the combined sampling point information is the same as the ordinate information of the perceived sampling point information.
Step S114b-2, determining abscissa information of the combined sample point information based on the ordinate information of the combined sample point information and the alignment information of the combined lane line.
Step S114b-3, determining abscissa information of the sensing sampling point information based on the ordinate information of the sensing sampling point information and the linear information of the sensing lane line.
The perceived sampling point information and combined sampling point information corresponding to any group of combined lane lines can be obtained through the above specific embodiments.
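Taken together, these steps sample both cubics at shared abscissas. A minimal Python sketch under that reading; all names and coefficient values are hypothetical:

    def sample_lane_line(coeffs, xs):
        """Evaluate a cubic lane line y = A*x**3 + B*x**2 + C*x + D at each abscissa."""
        A, B, C, D = coeffs
        return [A * x**3 + B * x**2 + C * x + D for x in xs]

    combined_coeffs = (1e-6, 2e-4, 0.010, 1.75)   # hypothetical (A1, B1, C1, D1)
    perceived_coeffs = (1e-6, 2e-4, 0.012, 1.80)  # hypothetical (A2, B2, C2, D2)

    xs = [float(x) for x in range(0, 50, 5)]               # shared abscissas (step S114a-1)
    combined_ys = sample_lane_line(combined_coeffs, xs)    # step S114a-2
    perceived_ys = sample_lane_line(perceived_coeffs, xs)  # step S114a-3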
And step S115, performing matching calculation based on the sensing sampling point information and the combined sampling point information of each group of combined lane lines to obtain a calculation result corresponding to each group of combined lane lines.
Specifically, the method comprises the following steps:
and S115-1, calculating a root mean square error based on the sensing sampling point information and the combined sampling point information of each group of combined lane lines, and acquiring a calculation result corresponding to each group of combined lane lines.
The root mean square error (RMSE) is also called the standard error. It is defined over the deviations di, i = 1, 2, 3, ..., n. For a limited number of measurements, the root mean square error is often expressed by the following equation:

RMSE = sqrt((d1² + d2² + ... + dn²) / n);

where n is the number of mutually correlated perceived sampling point information and combined sampling point information appearing in pairs for each group of combined lane lines;

di is the deviation of the i-th pair of correlated perceived sampling point information and combined sampling point information for each group of combined lane lines;

RMSE is the calculation result for each group of combined lane lines.
If the statistical distribution of the errors is normal, the probability that a random error falls within ±σ is 68%.
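A direct Python sketch of step S115-1, reusing the sampled ordinates from the sketch above (hypothetical data, not from the patent):

    import math

    def rmse(perceived_ys, combined_ys):
        """Root mean square error over the n correlated sampling point pairs."""
        deviations = [p - c for p, c in zip(perceived_ys, combined_ys)]
        return math.sqrt(sum(d * d for d in deviations) / len(deviations))

    result = rmse(perceived_ys, combined_ys)  # one calculation result per group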
And step S116, when the calculation result meets the preset matching condition, determining the linear information of the combined lane line corresponding to the calculation result as a matching object of the linear information of the perception lane line.
In a specific embodiment, when the calculation result satisfies a preset matching condition, determining that the alignment information of the combined lane line corresponding to the calculation result is a matching object of the alignment information of the perceived lane line includes the following steps:
step S116b-1, when the calculation result is the minimum value among all the calculation results, determining that the line shape information of the combined lane line corresponding to the minimum value is a matching object of the line shape information of the perceived lane line.
That is, the linear information of the perception lane line is matched with the linear information of all the combined lane lines, the calculation results are respectively obtained, and the linear information of the combined lane line related to the minimum value in all the calculation results is a matching object of the linear information of the perception lane line.
For example, continuing the example in the first embodiment, the perceived lane lines are the B1 line and the C1 line, and the groups of combined lane lines are: A2 and B2, A2 and C2, A2 and D2, B2 and C2, B2 and D2, C2 and D2. The root mean square error calculation results corresponding to these groups of combined lane lines are, respectively: 0.25, 0.36, 0.29, 0.21, 0.35, 0.37. Since 0.21 is the minimum value among all the calculation results, the linear information of the combined lane lines B2 and C2 associated with 0.21 is the matching object of the linear information of the perceived lane lines B1 and C1.
In this specific embodiment, the matching calculation results of the linear information of all the combined lane lines need to be obtained first, and the minimum value is then found among all the calculation results, so that the matching object of the linear information of the perceived lane line is determined through the minimum value.
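In code, this embodiment reduces to taking the minimum over the per-group results (the numbers repeat the example above):

    # RMSE calculation result for each group of combined lane lines.
    results = {
        ("A2", "B2"): 0.25, ("A2", "C2"): 0.36, ("A2", "D2"): 0.29,
        ("B2", "C2"): 0.21, ("B2", "D2"): 0.35, ("C2", "D2"): 0.37,
    }
    best_group = min(results, key=results.get)  # ('B2', 'C2'), with the minimum RMSE 0.21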
In another specific embodiment, when the calculation result satisfies a preset matching condition, determining that the alignment information of the combined lane line is a matching object of the alignment information of the perceived lane line includes the following steps:
step S116a-1, a plurality of historically valid calculation results are obtained.
A valid calculation result refers to a historical calculation result for which the linear information of the combined lane line associated with the calculation result was determined as a matching object of the linear information of the perceived lane line.
Optionally, the preset number of historical valid calculation results are selected from the valid calculation results generated before the current matching calculation.
Step S116a-2, determining the average of the valid calculation results as the matching threshold.
The matching threshold is the average value of the preset number of valid calculation results. The preset number may cover historical valid calculation results from ramps, or historical valid calculation results from any lanes; the valid calculation results may be consecutive in time or selected according to a preset rule.
Step S116a-3, when the calculation result is smaller than the matching threshold, determining that the linear information of the combined lane line corresponding to the calculation result is a matching object of the linear information of the perceived lane line.
In this embodiment, it is not necessary to obtain the matching calculation results of the linear information of all the combined lane lines; the matching object of the linear information of the perceived lane line can be determined as soon as a calculation result is smaller than the matching threshold. This reduces the amount of calculation and improves matching efficiency.
In order to obtain a new effective calculation result in the next matching calculation, optionally, when the calculation result meets a preset matching condition, the method further includes:
step S117, determining the calculation result as a valid calculation result.
The embodiment of the disclosure performs matching calculation through the correlated sensing sampling point information and combined sampling point information, and determines whether the linear information of the combined lane line is a matching object of the linear information of the sensing lane line through a calculation result. The matching algorithm is simplified, and the matching efficiency is improved.
EXAMPLE III
Since the embodiment of the present disclosure is further optimized on the basis of the first and second embodiments, explanations of the same methods and terms are the same as in those embodiments and are not repeated here.
After the first and second embodiments, the method further comprises the steps of:
step S121, after the linear information of the combined lane line is determined to be a matching object of the linear information of the perception lane line, performing yaw calculation on the linear information of the perception lane line and the linear information of the combined lane line to obtain a yaw value.
The yaw calculation includes the iterative closest point (ICP) algorithm. ICP is used in computer vision for accurate registration of depth images; accurate registration is achieved by iteratively minimizing the distance between corresponding points of the source data and the target data. Specifically, ICP matches the data according to certain geometric characteristics, takes the matched points as hypothetical corresponding points, and solves for the motion parameters from this correspondence. The data are then transformed using the motion parameters, and a new correspondence is determined using the same geometric characteristics.
The yaw value includes a lateral distance Y and a yaw angle.
And the transverse distance represents the deviation distance of the linear information of the perception lane line and the linear information of the combined lane line in the horizontal direction.
And a yaw angle representing a deviation angle of the line-shaped information of the perception lane line and the line-shaped information of the combined lane line in the horizontal direction.
ICP calculation is performed on the linear information of the perceived lane line and the linear information of the combined lane line. For example, let a set of perceived sampling point information be pi(xi, yi), and the set of combined sampling point information matched to it be qj(xj, yj), where i and j are both 1, 2, ..., N (N is a positive integer). In the ICP calculation over the two sets of sampling points, the Euclidean distance between pi(xi, yi) and qj(xj, yj) is expressed as:

d(pi, qj) = sqrt((xi - xj)² + (yi - yj)²);

The rotation matrix R and translation matrix T that take the perceived sampling point information pi(xi, yi) to the combined sampling point information qj(xj, yj) satisfy:

qj = R × pi + T + Ni;

where Ni is a noise term. The optimal solution is found by the least squares method, minimizing:

E = (1/N) · Σ_{i=1..N} ||qj - (R × pi + T)||²;
thus, R and T were obtained when E was the smallest. T is a 1 x 2 matrix, and the value of the second row in T is the lateral distance Y. R is a 2 x 2 matrix, and the sum of all values in R is the yaw angle.
Step S122, correcting the geographical position information based on the yaw value.
After the lateral distance Y and the yaw angle between the linear information of the perceived lane line and the linear information of the combined lane line are obtained, the error value of the linear information of the map lane line can be calculated from the lateral distance Y and the yaw angle, so that the geographical position information in the above embodiments can be corrected. This avoids jumps in the positioning result and ensures positioning accuracy and robustness.
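A simplified illustration of this correction; the patent does not give the exact formula, so the geometry below (a lateral shift perpendicular to the current heading plus a heading rotation) is an assumption:

    import math

    def correct_position(x, y, heading, lateral_y, yaw_angle):
        """Shift a navigation fix by the lateral distance and rotate its heading."""
        x_corr = x - lateral_y * math.sin(heading)  # assumed: offset applied
        y_corr = y + lateral_y * math.cos(heading)  # perpendicular to the heading
        return x_corr, y_corr, heading + yaw_angle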
Corresponding to the above embodiments, the present disclosure also provides a lane recognition apparatus. The apparatus embodiment below implements the method steps described in the above embodiments; explanations based on the same names and meanings are the same as in the above embodiments, achieve the same technical effects, and are not repeated here.
Fig. 3 shows an embodiment of a lane recognition apparatus provided by the present disclosure.
As shown in fig. 3, the present disclosure provides a lane recognition device 300 including:
a perception information obtaining unit 301, configured to obtain linear information, number, and geographical location information of a perception lane line;
a map information obtaining unit 302, configured to obtain linear information of a map lane line based on the geographic position information, where the linear information of the map lane line and the linear information of the sensing lane line are both represented information of a same type of coordinate system;
a combined information obtaining unit 303, configured to obtain linear information of each group of combined lane lines from the linear information of the map lane lines, where the number of the combined lane lines in each group is the same as the number of the sensing lane lines;
a matching unit 304, configured to match the linear information of the perceived lane line with the linear information of each group of combined lane lines, and determine a matching object of the linear information of the perceived lane line.
Optionally, the matching unit 304 includes:
the first acquisition subunit is used for acquiring mutually-correlated sensing sampling point information and combined sampling point information of each group of combined lane lines, wherein the sensing sampling point information is information of sampling points in the linear information of the sensing lane lines, and the combined sampling point information is information of sampling points in the linear information of each group of combined lane lines;
the matching calculation subunit is used for performing matching calculation based on the sensing sampling point information and the combined sampling point information of each group of combined lane lines to obtain a calculation result corresponding to each group of combined lane lines;
and the object determining subunit is used for determining the linear information of the combined lane line corresponding to the calculation result as a matching object of the linear information of the perception lane line when the calculation result meets a preset matching condition.
Optionally, the first obtaining subunit includes:
the first determining subunit is used for determining that the abscissa information of the combined sampling point information is the same as the abscissa information of the perceived sampling point information;
the second determining subunit is used for determining the ordinate information of the combined sampling point information based on the abscissa information of the combined sampling point information and the linear information of the combined lane line;
and the third determining subunit is used for determining the ordinate information of the sensing sampling point information based on the abscissa information of the sensing sampling point information and the linear information of the sensing lane line.
Optionally, the first obtaining subunit includes:
the vertical coordinate determining subunit is used for determining that the vertical coordinate information of the combined sampling point information is the same as the vertical coordinate information of the sensing sampling point information;
the first abscissa determining subunit is used for determining the abscissa information of the combined sampling point information based on the ordinate information of the combined sampling point information and the linear information of the combined lane line;
and the second abscissa determining subunit is used for determining the abscissa information of the sensing sampling point information based on the ordinate information of the sensing sampling point information and the linear information of the sensing lane line.
Optionally, the matching calculation subunit includes:
and the first result acquisition subunit is used for calculating the root mean square error based on the information of the sensing sampling points and the information of the combined sampling points of each group of combined lane lines, and acquiring the calculation result corresponding to each group of combined lane lines.
Optionally, the object determining subunit includes:
a second acquisition subunit, configured to acquire a plurality of historical valid calculation results, the linear information of the historical combined lane line associated with the valid calculation results having been determined as a matching object of the linear information of the historical perceived lane line;
a threshold obtaining subunit, configured to determine that an average value of the effective calculation results is a matching threshold;
and the fourth determining subunit is configured to determine, when the calculation result is smaller than the matching threshold, that the linear information of the combined lane line corresponding to the calculation result is a matching object of the linear information of the perceived lane line.
Optionally, the object determining subunit includes:
and the fifth determining subunit is configured to determine, when the calculation result is the minimum value among all the calculation results, that the linear information of the combined lane line corresponding to the minimum value is a matching object of the linear information of the perceived lane line.
Optionally, the matching unit 304 further includes:
and the sixth determining subunit is configured to determine that the calculation result is an effective calculation result when the calculation result meets a preset matching condition.
Optionally, the apparatus further comprises:
the yaw calculation unit is used for performing yaw calculation on the linear information of the perception lane line and the linear information of the combined lane line to acquire a yaw value after determining that the linear information of the combined lane line is a matching object of the linear information of the perception lane line;
a correction unit for correcting the geographical position information based on the yaw value.
Optionally, the line-shaped information of the map lane line and the line-shaped information of the perception lane line are placed in the same coordinate system.
The apparatus acquires the linear information of the combined lane lines from the linear information of the map lane lines, matches the linear information of the combined lane lines with the linear information of the perceived lane lines, and determines the matching object of the linear information of the perceived lane lines. The attribute information and type information of the perceived lane lines do not need to be detected, which avoids information loss caused by detection failure and improves the validity and accuracy of matching.
The disclosed embodiments also provide an electronic device for the lane recognition method, the electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the lane recognition method of the above embodiments.
The disclosed embodiments also provide a computer storage medium for lane recognition, the computer storage medium storing computer-executable instructions that can execute the lane recognition method described in the above embodiments.
The foregoing description is only a preferred embodiment of the disclosure and an illustration of the principles of the technology employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to the particular combination of features described above, but also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by replacing the above features with features of similar functions disclosed in (but not limited to) this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (18)

1. A lane recognition method, characterized by comprising:
acquiring linear information, quantity and geographical position information of a perception lane line;
acquiring the linear information of a map lane line based on the geographic position information, wherein the linear information of the map lane line and the linear information of the perception lane line are represented by the same type of coordinate system;
acquiring the linear information of each group of combined lane lines from the linear information of the map lane lines, wherein the number of the combined lane lines in each group is the same as that of the perception lane lines;
and matching the linear information of the perception lane line with the linear information of each group of combined lane lines to determine a matching object of the linear information of the perception lane line.
2. The method according to claim 1, wherein the matching the linear information of the perceived lane line with the linear information of each group of combined lane lines to determine a matching object of the linear information of the perceived lane line comprises:
acquiring mutually-correlated sensing sampling point information and combined sampling point information of each group of combined lane lines, wherein the sensing sampling point information is information of a sampling point in linear information of the sensing lane lines, and the combined sampling point information is information of a sampling point in linear information of each group of combined lane lines;
performing matching calculation based on the sensing sampling point information and the combined sampling point information of each group of combined lane lines to obtain a calculation result corresponding to each group of combined lane lines;
and when the calculation result meets a preset matching condition, determining the linear information of the combined lane line corresponding to the calculation result as a matching object of the linear information of the perception lane line.
3. The method of claim 2, wherein the acquiring of correlated perceived sampling point information and combined sampling point information comprises:
determining that the abscissa information of the combined sampling point information is the same as the abscissa information of the sensing sampling point information;
determining the ordinate information of the combined sampling point information based on the abscissa information of the combined sampling point information and the linear information of the combined lane line;
and determining the ordinate information of the sensing sampling point information based on the abscissa information of the sensing sampling point information and the linear information of the sensing lane line.
4. The method of claim 2, wherein the acquiring of correlated perceived sampling point information and combined sampling point information comprises:
determining that the ordinate information of the combined sampling point information is the same as the ordinate information of the sensing sampling point information;
determining the abscissa information of the combined sampling point information based on the ordinate information of the combined sampling point information and the linear information of the combined lane line;
and determining the abscissa information of the sensing sampling point information based on the ordinate information of the sensing sampling point information and the linear information of the sensing lane line.
5. The method according to claim 2, wherein the performing a matching calculation based on the perception sampling point information and the combined sampling point information of each group of combined lane lines to obtain the calculation result corresponding to each group of combined lane lines comprises:
calculating a root mean square error based on the perception sampling point information and the combined sampling point information of each group of combined lane lines, and taking the root mean square error as the calculation result corresponding to each group of combined lane lines.
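A minimal sketch of the root-mean-square-error calculation of claim 5, over the correlated sample pairs of one candidate group (flattening the samples of all lane lines in the group into one array is an assumption; the claim only names the error measure):

```python
import numpy as np

def group_rmse(perceived_ys, combined_ys):
    """Root mean square error over the correlated sample ordinates of
    one candidate group (claim 5). Both inputs are flat arrays covering
    every lane line in the group; lower values mean a closer match."""
    diff = np.asarray(perceived_ys, dtype=float) - np.asarray(combined_ys, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))
```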
6. The method according to claim 2, wherein, when the calculation result satisfies a preset matching condition, the determining that the linear information of the combined lane line corresponding to the calculation result is a matching object of the linear information of the perception lane line comprises:
acquiring a plurality of historical valid calculation results, wherein the linear information of the historical combined lane line associated with each valid calculation result was determined as a matching object of the linear information of a historical perception lane line;
determining the average value of the valid calculation results as a matching threshold;
and when the calculation result is smaller than the matching threshold, determining that the linear information of the combined lane line corresponding to the calculation result is a matching object of the linear information of the perception lane line.
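Claim 6's condition, sketched under the assumption that lower calculation results mean better matches (consistent with an RMSE): the threshold is the mean of previously accepted results.

```python
def meets_threshold(result, valid_history):
    """Claim-6 condition, sketched: a new calculation result matches
    when it falls below the mean of the historical valid results."""
    threshold = sum(valid_history) / len(valid_history)
    return result < threshold
```

Claim 8 below then marks each accepted result as a valid calculation result, so `valid_history` grows and the threshold adapts over time.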
7. The method according to claim 2, wherein, when the calculation result satisfies a preset matching condition, the determining that the linear information of the combined lane line corresponding to the calculation result is a matching object of the linear information of the perception lane line comprises:
when the calculation result is the minimum value among all the calculation results, determining the linear information of the combined lane line corresponding to the minimum value as a matching object of the linear information of the perception lane line.
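Claims 6 and 7 give alternative readings of the preset condition; claim 7's is a plain minimum over all candidate groups. A one-line sketch (`results_by_group` is a hypothetical mapping from each candidate group to its calculation result):

```python
def best_group(results_by_group):
    """Claim-7 condition, sketched: pick the candidate group whose
    calculation result is the minimum among all results."""
    return min(results_by_group, key=results_by_group.get)
```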
8. The method according to claim 2, wherein when the calculation result satisfies a preset matching condition, the method further comprises:
and determining the calculation result as a valid calculation result.
9. The method of claim 2, further comprising:
when the linear information of the combined lane line is determined to be a matching object of the linear information of the perception lane line, performing yaw calculation on the linear information of the perception lane line and the linear information of the combined lane line to obtain a yaw value;
correcting the geographic position information based on the yaw value.
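Claim 9 does not say how the yaw value is computed. One plausible sketch, under the cubic-polynomial assumption used above: compare the headings of the matched perception and combined lines at the vehicle's longitudinal position, then apply the difference as a heading correction.

```python
import math

def yaw_value(perceived_coeffs, combined_coeffs, x=0.0):
    """One plausible yaw calculation for claim 9 (the claim leaves the
    method open): take each line's heading from the derivative of its
    assumed cubic y(x) at longitudinal position x; the yaw value is
    the heading difference between the matched map and perceived lines."""
    def heading(c):
        c3, c2, c1, _c0 = c  # highest-degree-first cubic coefficients
        return math.atan(3 * c3 * x ** 2 + 2 * c2 * x + c1)
    return heading(combined_coeffs) - heading(perceived_coeffs)
```

The resulting yaw value would then rotate the vehicle's heading within the geographic position estimate; how the correction is applied is likewise left open by the claim.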
10. The method of any one of claims 1-9, wherein the linear information of the map lane lines and the linear information of the perception lane lines are placed in the same coordinate system.
11. A lane recognition apparatus, characterized by comprising:
the perception information acquisition unit is used for acquiring the linear information, the number, and the geographic position information of the perception lane lines;
the map information acquisition unit is used for acquiring the linear information of map lane lines based on the geographic position information, wherein the linear information of the map lane lines and the linear information of the perception lane line are represented in the same type of coordinate system;
the combined information acquisition unit is used for acquiring the linear information of each group of combined lane lines from the linear information of the map lane lines, wherein the number of combined lane lines in each group is the same as the number of perception lane lines;
and the matching unit is used for matching the linear information of the perception lane line with the linear information of each group of combined lane lines and determining a matching object of the linear information of the perception lane line.
12. The apparatus of claim 11, wherein the matching unit comprises:
the first acquisition subunit is used for acquiring mutually correlated perception sampling point information and combined sampling point information of each group of combined lane lines, wherein the perception sampling point information is the information of sampling points in the linear information of the perception lane line, and the combined sampling point information is the information of sampling points in the linear information of each group of combined lane lines;
the matching calculation subunit is used for performing matching calculation based on the sensing sampling point information and the combined sampling point information of each group of combined lane lines to obtain a calculation result corresponding to each group of combined lane lines;
and the object determining subunit is used for determining the linear information of the combined lane line corresponding to the calculation result as a matching object of the linear information of the perception lane line when the calculation result meets a preset matching condition.
13. The apparatus of claim 12, wherein the first obtaining subunit comprises:
the first determining subunit is used for setting the abscissa information of the combined sampling point information to be the same as the abscissa information of the perception sampling point information;
the second determining subunit is used for determining the ordinate information of the combined sampling point information based on the abscissa information of the combined sampling point information and the linear information of the combined lane line;
and the third determining subunit is used for determining the ordinate information of the perception sampling point information based on the abscissa information of the perception sampling point information and the linear information of the perception lane line.
14. The apparatus of claim 12, wherein the match computation subunit comprises:
and the first result acquisition subunit is used for calculating a root mean square error based on the perception sampling point information and the combined sampling point information of each group of combined lane lines, and taking the root mean square error as the calculation result corresponding to each group of combined lane lines.
15. The apparatus of claim 12, wherein the object determination subunit comprises:
the second acquisition subunit is used for acquiring a plurality of historical valid calculation results, wherein the linear information of the historical combined lane line associated with each valid calculation result was determined as a matching object of the linear information of a historical perception lane line;
the threshold acquisition subunit is used for determining the average value of the valid calculation results as a matching threshold;
and the fourth determining subunit is used for determining, when the calculation result is smaller than the matching threshold, that the linear information of the combined lane line corresponding to the calculation result is a matching object of the linear information of the perception lane line.
16. The apparatus of claim 12, wherein the object determination subunit comprises:
and the fifth determining subunit is used for determining, when the calculation result is the minimum value among all the calculation results, that the linear information of the combined lane line corresponding to the minimum value is a matching object of the linear information of the perception lane line.
17. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1 to 10.
18. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to carry out the method of any one of claims 1 to 10.
CN202110413035.8A 2021-04-16 2021-04-16 Lane recognition method, device, medium and electronic equipment Pending CN112949609A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110413035.8A CN112949609A (en) 2021-04-16 2021-04-16 Lane recognition method, device, medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN112949609A true CN112949609A (en) 2021-06-11

Family

ID=76232863

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110413035.8A Pending CN112949609A (en) 2021-04-16 2021-04-16 Lane recognition method, device, medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN112949609A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105444770A (en) * 2015-12-18 2016-03-30 上海交通大学 Intelligent mobile phone-based lane grade map generating and positioning system and method
CN107643086A (en) * 2016-07-22 2018-01-30 北京四维图新科技股份有限公司 A kind of vehicle positioning method, apparatus and system
CN112166059A (en) * 2018-05-25 2021-01-01 Sk电信有限公司 Position estimation device for vehicle, position estimation method for vehicle, and computer-readable recording medium storing computer program programmed to execute the method
US20200167575A1 (en) * 2018-11-28 2020-05-28 Here Global B.V. Method and system of a machine learning model for detection of physical dividers
CN111380538A (en) * 2018-12-28 2020-07-07 沈阳美行科技有限公司 Vehicle positioning method, navigation method and related device
CN110567480A (en) * 2019-09-12 2019-12-13 北京百度网讯科技有限公司 Optimization method, device and equipment for vehicle positioning and storage medium
CN111932887A (en) * 2020-08-17 2020-11-13 武汉四维图新科技有限公司 Method and equipment for generating lane-level track data
CN112560680A (en) * 2020-12-16 2021-03-26 北京百度网讯科技有限公司 Lane line processing method and device, electronic device and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SCHLICHTING, A. et al.: "Map matching for vehicle localization based on serial LiDAR sensors", 2019 IEEE Intelligent Transportation Systems Conference (ITSC), 31 December 2019, pages 1257-1262 *
OU, Kejun: "Lane information detection method based on vision and maps", China Masters' Theses Full-text Database (Engineering Science and Technology II), no. 7, 15 July 2019, pages 035-154 *
ZHENG, Shichen et al.: "Road network matching method for driving trajectories based on particle filtering", Journal of Geo-information Science, vol. 22, no. 11, 31 December 2020, pages 2109-2117 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115147789A (en) * 2022-06-16 2022-10-04 禾多科技(北京)有限公司 Method, device, equipment and computer readable medium for detecting split and combined road information

Similar Documents

Publication Publication Date Title
CN105678689B (en) High-precision map data registration relation determining method and device
KR102249769B1 (en) Estimation method of 3D coordinate value for each pixel of 2D image and autonomous driving information estimation method using the same
CN108692719B (en) Object detection device
US7764284B2 (en) Method and system for detecting and evaluating 3D changes from images and a 3D reference model
KR102103944B1 (en) Distance and position estimation method of autonomous vehicle using mono camera
EP3606812A1 (en) Automated draft survey
US8837774B2 (en) Inverse stereo image matching for change detection
CN114252082B (en) Vehicle positioning method and device and electronic equipment
CN110274598B (en) Robot monocular vision robust positioning estimation method
WO2021017213A1 (en) Visual positioning effect self-detection method, and vehicle-mounted terminal
CN114332225A (en) Lane line matching positioning method, electronic device and storage medium
CN114280582A (en) Calibration and calibration method and device for laser radar, storage medium and electronic equipment
CN112949609A (en) Lane recognition method, device, medium and electronic equipment
CN114821530A (en) Deep learning-based lane line detection method and system
CN108389228B (en) Ground detection method, device and equipment
US11477371B2 (en) Partial image generating device, storage medium storing computer program for partial image generation and partial image generating method
CN115507752A (en) Monocular vision distance measurement method and system based on parallel environment elements
Fursov et al. Computing RPC using robust selection of GCPs
CN111260538A (en) Positioning and vehicle-mounted terminal based on long-baseline binocular fisheye camera
CN115272456A (en) Laser radar and camera online drift detection method, correction method, device and storage medium
CN114067555A (en) Registration method and device for data of multiple base stations, server and readable storage medium
CN117433511B (en) Multi-sensor fusion positioning method
EP3574472B1 (en) Apparatus and method for registering recorded images
EP4345750A1 (en) Position estimation system, position estimation method, and program
CN110770741B (en) Lane line identification method and device and vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination