CN117928575A - Lane information extraction method, system, electronic device and storage medium - Google Patents


Info

Publication number
CN117928575A
Authority
CN
China
Prior art keywords
lane
coordinate system
image
local
world coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410336833.9A
Other languages
Chinese (zh)
Other versions
CN117928575B (en)
Inventor
贾洋
李升甫
许濒支
孙晓鹏
南轲
姚周祥
张衡
李艳玲
刘霜辰
倪愿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Highway Planning Survey and Design Institute Ltd
Original Assignee
Sichuan Highway Planning Survey and Design Institute Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Highway Planning Survey and Design Institute Ltd filed Critical Sichuan Highway Planning Survey and Design Institute Ltd
Priority to CN202410336833.9A priority Critical patent/CN117928575B/en
Publication of CN117928575A publication Critical patent/CN117928575A/en
Application granted granted Critical
Publication of CN117928575B publication Critical patent/CN117928575B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiment of the invention discloses a lane information extraction method, system, electronic device and storage medium. Vehicle-mounted trajectory positioning data, comprising ground lane marking sequence image information and the corresponding acquisition times, are acquired; each frame of image is semantically segmented and the image vanishing point of the lane markings is computed to obtain the coordinates of the lane markings in the image coordinate system. Combined with the trajectory positioning data, the image vanishing point of the lane markings and the coordinates of the lane markings in the image coordinate system are transformed to obtain the positions of the local lane markings in the world coordinate system. These positions are then clustered using covariance and distance calculations, and curve fitting yields the global lane information. The method effectively reduces the cost of extracting lane information for autonomous-driving maps, improves production efficiency, enlarges the coverage of lane information extraction, improves map coverage and update efficiency, and supports the updating of lane information in autonomous-driving maps.

Description

Lane information extraction method, system, electronic device and storage medium
Technical Field
The disclosure relates to the technical fields of autonomous-driving maps and photogrammetry, and in particular to a lane information extraction method, a lane information extraction system, an electronic device and a storage medium.
Background
Autonomous vehicles are developing rapidly, and autonomous driving is an inevitable trend in vehicle engineering. It is consistent with environmental protection goals and with society's demand for efficiency and low cost, and it makes work and daily travel more convenient. Ordinary navigation maps have low precision and limited information; they do not contain detailed data such as lane information or road feature information. High-precision electronic maps are therefore one of the key factors driving the development of driverless vehicles. With a high-precision map, a driverless vehicle no longer needs to build a local map by sensing its surroundings in real time and exploring as it proceeds; it only needs to match itself accurately into the electronic map based on the sensed surroundings so that the decision system can make correct decisions.
However, the existing production of lane information for autonomous-driving maps, based on professional surveying and mapping, has high acquisition cost, low update efficiency and small coverage; full spatio-temporal coverage of road information is hard to achieve, and the real-time map lane information update requirements of autonomous vehicles cannot be met. With the completion of the BeiDou-3 global constellation and the domestic production of multi-frequency, multi-constellation positioning chips, high-precision, low-cost civilian positioning terminals are becoming widespread, providing support for extracting autonomous-driving map elements from crowdsourced data such as vehicle-mounted trajectories and sequence images.
Disclosure of Invention
The invention provides a lane information extraction method, system, electronic device and storage medium, which address the problems that acquisition cost is high, update efficiency is low, coverage is small, full spatio-temporal coverage of road information is difficult to achieve, and the real-time map lane information update requirements of autonomous vehicles cannot be met. By combining crowdsourced sequence image information with crowdsourced vehicle-mounted trajectory data to extract lane information for autonomous-driving maps, the extraction cost can be reduced effectively, production efficiency improved, the coverage of lane information extraction enlarged, and map coverage and update efficiency improved, supporting the updating of lane information in autonomous-driving maps. With such information a vehicle can plan its motion path in advance and choose the most reasonable lane, improving the intelligence and comfort of the vehicle, which is of great significance to the development of autonomous driving.
In a first aspect, an embodiment of the present disclosure provides a lane information extraction method, including: acquiring ground lane marking sequence image data and the vehicle-mounted trajectory positioning data corresponding to its acquisition times, performing semantic segmentation on each frame of image data, and then computing the image vanishing point of the lane markings to obtain the coordinates of the lane markings in the image coordinate system; combining the vehicle-mounted trajectory positioning data and transforming the image vanishing point of the lane markings and the coordinates of the lane markings in the image coordinate system to obtain the positions of the local lane markings in the world coordinate system; and clustering the positions of the local lane markings in the world coordinate system using covariance and distance calculations, selecting a lane core point for each cluster, and fitting curves through all lane core points to determine the curve between each two consecutive core points, thereby obtaining global lane data.
With reference to the first aspect, in some embodiments, acquiring the ground lane marking sequence image data and the vehicle-mounted trajectory positioning data corresponding to its acquisition times, performing semantic segmentation on each frame of image data, and then obtaining the coordinates of the lane markings in the image coordinate system by computing the image vanishing point of the lane markings, includes:
acquiring, for each frame in the autonomous-driving map imagery, the ground lane marking sequence image data together with the vehicle-mounted trajectory positioning data for the corresponding time, the data set at each time consisting of the trajectory positioning data at that time and the image data at that time; and performing semantic segmentation on the image data and, once the lane markings lie within the target range of the image coordinate system, computing the image vanishing point of the lane markings to obtain the coordinates of the lane markings in the image coordinate system. The quantities entering the vanishing point computation are: the normal vector of the great circle corresponding to each lane marking, the camera intrinsic parameters, the endpoints of the N-th lane marking, the direction of the vanishing point, the set of normal vectors of the Gaussian-sphere great circles corresponding to the lane markings extracted from each frame of lane marking image data (the normal vector corresponding to the N-th lane marking being one of its elements), and the image coordinates of the vanishing point of the lane markings, for the N-th lane marking.
With reference to the first aspect, in some embodiments, combining the vehicle-mounted trajectory positioning data and transforming the image vanishing point of the lane markings and the coordinates of the lane markings in the image coordinate system to obtain the positions of the local lane markings in the world coordinate system includes:
assuming the ground is a plane, computing the position of each local lane marking in the world coordinate system, expressed as the longitude, latitude and altitude of the lane marking in the world coordinate system, from: the fixed deviation between the camera and the on-board GNSS direction, the camera pitch angle, the coordinates of the lane marking in the image coordinate system, the longitude, latitude and altitude of the vehicle-mounted trajectory positioning data, the camera focal length, the ordinate of the extracted lane marking vanishing point in the image, the direction of the lane marking, the direction of the trajectory, the azimuth rotation matrix, the pitch-angle rotation matrix, the inverse of the set of normal vectors of the Gaussian-sphere great circles corresponding to the lane markings extracted from each frame of lane marking image data, and the total number of lane markings.
With reference to the first aspect, in some embodiments, clustering the positions of the local lane markings in the world coordinate system using covariance and distance calculations, selecting a lane core point for each cluster, and fitting curves through all lane core points to determine the curve between each two consecutive core points, thereby obtaining global lane data, includes:
computing the lateral and longitudinal covariance of the positions of the local lane markings in the world coordinate system, the distance between the positions of the local lane markings in the world coordinate system, and the direction difference between the positions of the local lane markings in the world coordinate system. The quantities involved are: the distance between the positions of two local lane markings in the world coordinate system; the direction difference; the mean of the abscissas of the positions of the local lane markings in the world coordinate system; the longitudinal covariance of those positions; the ordinates of the positions; the mean of the ordinates; the lateral-longitudinal covariance matrix of the positions; the lateral covariance; the number of positions of local lane markings in the world coordinate system; and the directions of the positions of the two local lane markings in the world coordinate system. The positions of all local lane markings in the world coordinate system are integrated and all data are clustered using the distance and direction-difference functions; a lane core point is selected for each cluster, and curve fitting is performed over all lane core points to determine the curve between each two consecutive core points, thereby obtaining global lane data. The curve fitting equation is a clothoid whose tangential angle is expressed in terms of the arc length s between a point on the curve and the starting point, the coordinates of the previous lane core point, the coordinates of the next lane core point, the direction at the starting point of the curve, the curvature at the starting point of the curve, the rate of change of curvature, and the length of the curve segment.
In a second aspect, an embodiment of the present disclosure provides a lane information extraction system, including: an image coordinate unit, which acquires the ground lane marking sequence image data and the vehicle-mounted trajectory positioning data corresponding to its acquisition times, performs semantic segmentation on each frame of image data, and then computes the image vanishing point of the lane markings to obtain the coordinates of the lane markings in the image coordinate system; a local lane unit, which combines the vehicle-mounted trajectory positioning data and transforms the image vanishing point of the lane markings and the coordinates of the lane markings in the image coordinate system to obtain the positions of the local lane markings in the world coordinate system; and a global lane unit, which clusters the positions of the local lane markings in the world coordinate system using covariance and distance calculations, selects a lane core point for each cluster, and fits curves through all lane core points to determine the curve between each two consecutive core points, thereby obtaining global lane data.
With reference to the second aspect, in some embodiments, the image coordinate unit acquires the ground lane marking sequence image data and the vehicle-mounted trajectory positioning data corresponding to its acquisition times, performs semantic segmentation on each frame of image data, and then computes the image vanishing point of the lane markings to obtain the coordinates of the lane markings in the image coordinate system, by: acquiring, for each frame in the autonomous-driving map imagery, the ground lane marking sequence image data together with the vehicle-mounted trajectory positioning data for the corresponding time, the data set at each time consisting of the trajectory positioning data at that time and the image data at that time; and performing semantic segmentation on the image data and, once the lane markings lie within the target range of the image coordinate system, computing the image vanishing point of the lane markings to obtain the coordinates of the lane markings in the image coordinate system. The quantities entering the vanishing point computation are: the normal vector of the great circle corresponding to each lane marking, the camera intrinsic parameters, the endpoints of the N-th lane marking, the direction of the vanishing point, the set of normal vectors of the Gaussian-sphere great circles corresponding to the lane markings extracted from each frame of lane marking image data (the normal vector corresponding to the N-th lane marking being one of its elements), and the image coordinates of the vanishing point of the lane markings, for the N-th lane marking.
With reference to the second aspect, in some embodiments, the local lane unit combines the vehicle-mounted trajectory positioning data and transforms the image vanishing point of the lane markings and the coordinates of the lane markings in the image coordinate system to obtain the positions of the local lane markings in the world coordinate system, by: assuming the ground is a plane, computing the position of each local lane marking in the world coordinate system, expressed as the longitude, latitude and altitude of the lane marking in the world coordinate system, from: the fixed deviation between the camera and the on-board GNSS direction, the camera pitch angle, the coordinates of the lane marking in the image coordinate system, the longitude, latitude and altitude of the vehicle-mounted trajectory positioning data, the camera focal length, the ordinate of the extracted lane marking vanishing point in the image, the direction of the lane marking, the direction of the trajectory, the azimuth rotation matrix, the pitch-angle rotation matrix, the inverse of the set of normal vectors of the Gaussian-sphere great circles corresponding to the lane markings extracted from each frame of lane marking image data, and the total number of lane markings.
With reference to the second aspect, in some embodiments, the global lane unit clusters the positions of the local lane markings in the world coordinate system using covariance and distance calculations, selects a lane core point for each cluster, and fits curves through all lane core points to determine the curve between each two consecutive core points to obtain global lane data, by: computing the lateral and longitudinal covariance of the positions of the local lane markings in the world coordinate system, the distance between the positions of the local lane markings in the world coordinate system, and the direction difference between the positions of the local lane markings in the world coordinate system, the quantities involved being: the distance between the positions of two local lane markings in the world coordinate system; the direction difference; the mean of the abscissas of the positions of the local lane markings in the world coordinate system; the longitudinal covariance of those positions; the ordinates of the positions; the mean of the ordinates; the lateral-longitudinal covariance matrix of the positions; the lateral covariance; the number of positions of local lane markings in the world coordinate system; and the directions of the positions of the two local lane markings in the world coordinate system; integrating the positions of all local lane markings in the world coordinate system and clustering all data using the distance and direction-difference functions; and selecting a lane core point for each cluster and fitting curves through all lane core points to determine the curve between each two consecutive core points, thereby obtaining global lane data, the curve fitting equation being a clothoid whose tangential angle is expressed in terms of the arc length s between a point on the curve and the starting point, the coordinates of the previous lane core point, the coordinates of the next lane core point, the direction at the starting point of the curve, the curvature at the starting point of the curve, the rate of change of curvature, and the length of the curve segment.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the lane information extraction method as described in the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable medium having stored thereon a computer program which, when executed by a processor, implements the steps of the lane information extraction method as described in the first aspect.
The invention has the following beneficial effects. Ground lane marking sequence image data and the vehicle-mounted trajectory positioning data corresponding to its acquisition times are acquired; each frame of image data is semantically segmented and the image vanishing point of the lane markings is computed to obtain the coordinates of the lane markings in the image coordinate system. Combined with the vehicle-mounted trajectory positioning data, the image vanishing point of the lane markings and the coordinates of the lane markings in the image coordinate system are transformed to obtain the positions of the local lane markings in the world coordinate system. These positions are clustered using covariance and distance calculations, a lane core point is selected for each cluster, and curve fitting over all lane core points determines the curve between each two consecutive core points, yielding global lane data. The method addresses the problems of high acquisition cost, low update efficiency, small coverage, the difficulty of achieving full spatio-temporal coverage of road information, and the inability to meet the real-time map lane information update requirements of autonomous vehicles. By combining crowdsourced sequence image information with crowdsourced vehicle-mounted trajectory data to extract lane information for autonomous-driving maps, the extraction cost can be reduced effectively, production efficiency improved, the coverage of lane information extraction enlarged, and map coverage and update efficiency improved, supporting the updating of lane information in autonomous-driving maps. With such information a vehicle can plan its motion path in advance and choose the most reasonable lane, improving the intelligence and comfort of the vehicle, which is of great significance to the development of autonomous driving.
Drawings
FIG. 1 is a flow chart of one embodiment of a lane information extraction method according to the present disclosure;
FIG. 2 is a schematic diagram of a partial lane information extraction flow of the present disclosure;
FIG. 3 is a schematic diagram of a global lane information extraction flow of the present disclosure;
FIG. 4 is a schematic diagram of an overall technical flow of lane information extraction of the present disclosure;
FIG. 5 is a schematic structural view of the lane information extraction system of the present disclosure;
FIG. 6 is a schematic diagram of the basic structure of an electronic device provided according to an embodiment of the present disclosure.
Detailed Description
Referring to fig. 1, a flow of one embodiment of a lane information extraction method according to the present disclosure is shown. As shown in fig. 1, the method comprises the steps of:
As shown in fig. 2, in step 101, the ground lane marking sequence image data and the vehicle-mounted trajectory positioning data corresponding to its acquisition times are acquired, semantic segmentation is performed on each frame of image data, and the image vanishing point of the lane markings is then computed to obtain the coordinates of the lane markings in the image coordinate system.
Here, step 101 includes:
With the completion of the BeiDou-3 global constellation, the further improvement of ground-based augmentation systems and the domestic production of multi-frequency, multi-constellation positioning chips, high-precision, low-cost civilian positioning terminals are becoming widespread, providing support for extracting autonomous-driving map elements from crowdsourced data such as vehicle-mounted trajectories and sequence images. For each frame in the autonomous-driving map imagery, crowdsourced sequence image information containing ground lane markings and the vehicle-mounted trajectory positioning data for the corresponding time are acquired via BeiDou-3; the data set at each time consists of the trajectory positioning data at that time and the image data at that time. The acquired data set is divided into two parts: one part is manually annotated and used as the training set, the other is used as the test set; the training set is used to train the network and the test set to evaluate how well it has been trained. A semantic segmentation framework for the lane image sequence is built with DeepLabV3+ as the base architecture combined with a deep residual network, and the sequence images are semantically segmented. The network can be divided into three modules: encoder, decoder and atrous spatial pyramid pooling (ASPP). In the encoder, a convolutional neural network performs convolution on the lane images and ASPP performs multi-scale spatial convolution; in the decoder, the image resolution is gradually restored by upsampling.
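For illustration, the sketch below sets up a DeepLabV3-style segmentation network with a ResNet backbone using torchvision. The class count, optimizer settings and tensor shapes are assumptions introduced for the example and are not specified by the patent; torchvision 0.13 or later is assumed.

```python
# Minimal sketch of a lane-marking segmentation setup, assuming a torchvision
# DeepLabV3 model with a ResNet-50 backbone (torchvision >= 0.13). The number
# of classes (background / lane marking) and the optimizer settings are
# illustrative assumptions, not values taken from the patent.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 2  # background + lane marking (assumed)

model = deeplabv3_resnet50(weights=None, num_classes=NUM_CLASSES)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, masks: torch.Tensor) -> float:
    """One back-propagation training step on a batch of frames and annotated masks."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)["out"]          # (B, NUM_CLASSES, H, W)
    loss = criterion(logits, masks)        # masks: (B, H, W) with class ids
    loss.backward()                        # error back-propagation
    optimizer.step()
    return loss.item()

@torch.no_grad()
def segment_frame(image: torch.Tensor) -> torch.Tensor:
    """Per-pixel lane-marking mask for a single frame of shape (1, 3, H, W)."""
    model.eval()
    return model(image)["out"].argmax(dim=1)  # (1, H, W)
```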
The parameters of the lane image sequence semantic segmentation network are optimized with the back-propagation (BP) algorithm: the network is trained on the training set, its degree of training is checked on the test set, and the sequence images are semantically segmented. Once the lane markings lie within the target range of the image coordinate system, the normal vector of the great circle corresponding to each lane marking is computed. The great circles of two parallel lines in image space intersect at a point on the Gaussian sphere; the ray from the sphere centre to this intersection is computed, and the direction of the image vanishing point of the lane markings is obtained by singular value decomposition. The image coordinates of the image vanishing point of the lane markings are then computed, giving the coordinates of the lane markings in the image coordinate system. The quantities involved are: the normal vector of the great circle corresponding to each lane marking, the camera intrinsic parameters, the endpoints of the N-th lane marking, the direction of the vanishing point, the set of normal vectors of the Gaussian-sphere great circles corresponding to the lane markings extracted from each frame of lane marking image data (the normal vector corresponding to the N-th lane marking being one of its elements), and the image coordinates of the vanishing point of the lane markings, for the N-th lane marking.
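The sketch below shows one standard realization of the Gaussian-sphere construction just described: each lane-marking segment is back-projected with the camera intrinsics to give a great-circle normal, the stacked normals are decomposed by SVD to obtain the vanishing direction, and the direction is re-projected to image coordinates. The symbol names and the exact expressions are assumptions consistent with the description, not the patent's literal formulas, which appear only as figures in the original publication.

```python
# Hedged sketch of the vanishing-point computation via Gaussian-sphere great
# circles. Symbol names (K, n, d, v) and the exact expressions are assumptions
# consistent with the text above.
import numpy as np

def great_circle_normal(K: np.ndarray, p1: np.ndarray, p2: np.ndarray) -> np.ndarray:
    """Normal of the Gaussian-sphere great circle for one lane-marking segment.

    K      : 3x3 camera intrinsic matrix.
    p1, p2 : segment endpoints in pixel coordinates, shape (2,).
    """
    K_inv = np.linalg.inv(K)
    r1 = K_inv @ np.array([p1[0], p1[1], 1.0])   # viewing rays of the endpoints
    r2 = K_inv @ np.array([p2[0], p2[1], 1.0])
    n = np.cross(r1, r2)
    return n / np.linalg.norm(n)

def vanishing_point(K: np.ndarray, normals: np.ndarray) -> np.ndarray:
    """Image coordinates of the common vanishing point of near-parallel markings.

    normals : (N, 3) stacked great-circle normals; the vanishing direction d
              minimizes |normals @ d| and is the right singular vector with the
              smallest singular value.
    """
    _, _, vt = np.linalg.svd(normals)
    d = vt[-1]                                   # vanishing direction on the sphere
    v = K @ d
    return v[:2] / v[2]                          # inhomogeneous image coordinates
```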
As shown in fig. 2, in step 102, the image vanishing point of the lane markings and the coordinates of the lane markings in the vehicle coordinate system are transformed, in combination with the trajectory positioning data, to obtain the positions of the local lane markings in the world coordinate system.
Here, step 102 includes: assuming the ground is a plane, combining the trajectory positioning data and transforming the image vanishing point of the lane markings and the coordinates of the lane markings in the vehicle coordinate system to obtain the positions of the local lane markings in the world coordinate system. The position of each local lane marking in the world coordinate system is expressed as the longitude, latitude and altitude of the lane marking in the world coordinate system and is computed from: the fixed deviation between the camera and the on-board GNSS direction, the camera pitch angle, the coordinates of the lane marking in the image coordinate system, the longitude, latitude and altitude of the vehicle-mounted trajectory positioning data, the camera focal length, the ordinate of the extracted lane marking vanishing point in the image, the direction of the lane marking, the direction of the trajectory, the azimuth rotation matrix, the pitch-angle rotation matrix, the inverse of the set of normal vectors of the Gaussian-sphere great circles corresponding to the lane markings extracted from each frame of lane marking image data, and the total number of lane markings.
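The patent gives its explicit transform only as figures; the following is a minimal sketch of a common flat-ground inverse perspective mapping that uses the same quantities named above (focal length, vanishing-point ordinate, camera pitch, fixed heading offset to the GNSS direction, and the trajectory position). The camera height above the road and the small-offset ENU-to-geodetic conversion are additional assumptions made only for this example.

```python
# Hedged sketch of a flat-ground inverse perspective mapping. The camera
# height above the ground and the local ENU approximation are illustrative
# assumptions, not the patent's formulas.
import math

EARTH_RADIUS = 6378137.0  # WGS-84 semi-major axis, metres

def pixel_to_world(u, v, cx, cy, f, v0, cam_height,
                   track_lon, track_lat, track_h, track_heading, heading_offset):
    """Project an image point assumed to lie on the ground plane to lon/lat/height.

    (u, v)         : lane-marking pixel coordinates; (cx, cy) principal point.
    f              : camera focal length in pixels.
    v0             : ordinate of the lane-marking vanishing point (gives pitch).
    cam_height     : assumed camera height above the road surface, metres.
    track_*        : vehicle-mounted GNSS trajectory fix (degrees / metres / radians).
    heading_offset : fixed deviation between camera axis and GNSS direction (radians).
    """
    pitch = math.atan2(cy - v0, f)               # pitch from the vanishing-point ordinate
    beta = math.atan2(v - cy, f)                 # vertical angle of the pixel ray
    alpha = math.atan2(u - cx, f)                # horizontal angle of the pixel ray
    depression = pitch + beta                    # angle of the ray below the horizon
    if depression <= 0:
        raise ValueError("pixel ray does not hit the ground plane")
    forward = cam_height / math.tan(depression)  # metres ahead of the camera
    lateral = forward * math.tan(alpha)          # metres to the right of the camera axis
    heading = track_heading + heading_offset     # world heading of the camera axis
    east = forward * math.sin(heading) + lateral * math.cos(heading)
    north = forward * math.cos(heading) - lateral * math.sin(heading)
    # Small local ENU offsets converted to geodetic increments.
    lat = track_lat + math.degrees(north / EARTH_RADIUS)
    lon = track_lon + math.degrees(east / (EARTH_RADIUS * math.cos(math.radians(track_lat))))
    return lon, lat, track_h - cam_height
```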
In step 103, the positions of the local lane markings in the world coordinate system are clustered using covariance and distance calculations, a lane core point is selected for each cluster, and curve fitting is performed over all lane core points to determine the curve between each two consecutive core points, thereby obtaining global lane data.
In this way, ground lane marking sequence image data and the vehicle-mounted trajectory positioning data corresponding to its acquisition times are acquired; each frame of image data is semantically segmented and the image vanishing point of the lane markings is computed to obtain the coordinates of the lane markings in the image coordinate system. Combined with the vehicle-mounted trajectory positioning data, the image vanishing point of the lane markings and the coordinates of the lane markings in the image coordinate system are transformed to obtain the positions of the local lane markings in the world coordinate system. These positions are clustered using covariance and distance calculations, a lane core point is selected for each cluster, and curve fitting over all lane core points determines the curve between each two consecutive core points, yielding global lane data. By combining crowdsourced sequence image information with crowdsourced vehicle-mounted trajectory data to extract lane information for autonomous-driving maps, the extraction cost can be reduced effectively, production efficiency improved, the coverage of lane information extraction enlarged, and map coverage and update efficiency improved, supporting the updating of lane information in autonomous-driving maps. With such information a vehicle can plan its motion path in advance and choose the most reasonable lane, improving the intelligence and comfort of the vehicle, which is of great significance to the development of autonomous driving.
As shown in fig. 3, here, step 103 includes:
The position in the world coordinate system of a local lane marking extracted from a given frame of the ground lane marking sequence image data is a local lane data point. The local lane markings are affected by illumination changes and occlusion by buildings and contain many noise points; to improve the extraction accuracy of the lane data and obtain complete lane data, all local lane data points are integrated and clustered, noise points are removed, all lane data points belonging to the same road are determined, and the lane data points are fitted with clothoid curves to obtain the global lane geometry.
The lateral and longitudinal covariance of the local lane data points and the distance between local lane data points are used to judge how close the points are to each other: when the distance between two local lane data points is smaller than a distance threshold, the two points are considered to belong to the same class; and when the direction difference between two local lane data points is smaller than a direction threshold, they are considered to belong to the same cluster. The quantities involved are: the distance between two local lane data points; the direction difference; the mean of the abscissas of the local lane data points; the longitudinal covariance of the local lane data points; the ordinates of the local lane data points; the mean of the ordinates; the lateral-longitudinal covariance matrix of the local lane data points; the lateral covariance; the number of local lane data points; and the directions of the two local lane data points. All local lane data points are integrated and clustered with the DBSCAN algorithm using the distance and direction-difference functions. Starting from the first local lane data point, the thresholds Eps (neighbourhood radius) and MinPts (minimum number of points contained in the neighbourhood) are used to judge whether the point is a core point; if so, a class is created for it, otherwise it is marked as a noise point. From each core point, all density-connected local lane data points are found and the class is expanded until no further local lane data points can be added. Finally, all lane data points are divided into core points, boundary points and noise points; the noise points are removed and all local lane data points belonging to the same cluster are determined.
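A minimal sketch of this clustering step is given below, using scikit-learn's DBSCAN with a precomputed pairwise distance matrix. The thresholds and the way the direction-difference gate is folded into the metric are illustrative assumptions, not values from the patent.

```python
# Hedged sketch of clustering local lane data points with DBSCAN using a
# combined distance / direction-difference criterion.
import numpy as np
from sklearn.cluster import DBSCAN

LARGE = 1e6  # sentinel distance: pairs failing the direction gate can never be neighbours

def cluster_lane_points(xy: np.ndarray, directions: np.ndarray,
                        eps: float = 0.5, min_pts: int = 5,
                        max_dir_diff: float = np.radians(10.0)) -> np.ndarray:
    """Cluster lane points; pairs farther apart than eps or differing in heading
    by more than max_dir_diff are never merged.

    xy         : (n, 2) positions of local lane data points (metres).
    directions : (n,) heading of each lane data point (radians).
    Returns DBSCAN labels; -1 marks noise points.
    """
    dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)   # pairwise distances
    dir_diff = np.abs(directions[:, None] - directions[None, :])
    dir_diff = np.minimum(dir_diff, 2.0 * np.pi - dir_diff)           # wrap to [0, pi]
    dist = np.where(dir_diff <= max_dir_diff, dist, LARGE)            # direction gate
    labels = DBSCAN(eps=eps, min_samples=min_pts, metric="precomputed").fit_predict(dist)
    return labels
```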
Each cluster is divided into windows 1.5 metres wide along the direction of travel of the trajectory, and the local lane data points within each window are clustered again with the DBSCAN algorithm. The distances from each local lane data point in the cluster to the other local lane data points are calculated, and the point whose distance to the others is smallest is selected as the lane core point of the window. All lane core points are then fitted with curves to generate complete lane information with continuously varying curvature: starting from the first lane core point, the curve between each two consecutive lane core points is solved, yielding the global lane geometry, i.e. the global lane data. The curve fitting equation is a clothoid whose tangential angle is expressed in terms of the arc length s between a point on the curve and the starting point, the coordinates of the previous lane core point, the coordinates of the next lane core point, the direction at the starting point of the curve, the curvature at the starting point of the curve, the rate of change of curvature, and the length of the curve segment. As shown in fig. 4, the overall technical flow of lane information extraction comprises two parts: local lane information extraction and global lane information extraction. In local lane information extraction, sequence images containing ground lane markings and the corresponding vehicle-mounted trajectory positioning information are first acquired; the relative position of the lane markings in the vehicle coordinate system is extracted from each frame using a deep-learning semantic segmentation algorithm; then, combined with the trajectory positioning data, the absolute positions of the local lane markings in the world coordinate system are calculated by inverse perspective transformation. In global lane information extraction, noise points are first removed from the absolute position information of the local lane markings using an improved DBSCAN clustering algorithm, and each cluster is then fitted with clothoid curves to obtain the global lane information. Because the extraction of autonomous-driving map lane information from vehicle-mounted trajectories and sequence images uses low-cost crowdsourced vehicle-mounted platforms for data acquisition, it reduces the production cost of high-precision maps; moreover, crowdsourced vehicle-mounted platforms have wide coverage and high data acquisition frequency, so map coverage and update efficiency can be improved, which is of great significance to the development of autonomous driving.
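For the clothoid segment construction mentioned above, the following sketch integrates the standard clothoid tangential-angle form consistent with the parameters named in the text (start direction, start curvature, curvature change rate, segment length). This is an assumed reconstruction, not the patent's literal equation, and the fitting step that chooses the parameters so the segment ends at the next core point with a consistent heading (for example by least squares) is not shown.

```python
# Hedged sketch of the clothoid segment between two consecutive lane core
# points, assuming the standard form theta(s) = theta0 + k0*s + 0.5*c*s^2.
import numpy as np

def clothoid_points(x0, y0, theta0, k0, c, length, n_samples=50):
    """Sample the clothoid starting at core point (x0, y0) over arc length `length`.

    theta0 : tangent direction at the starting core point (radians).
    k0     : curvature at the starting core point.
    c      : rate of change of curvature along the arc.
    Returns an (n_samples, 2) array obtained by numerically integrating
    x'(s) = cos(theta(s)), y'(s) = sin(theta(s)).
    """
    s = np.linspace(0.0, length, n_samples)
    theta = theta0 + k0 * s + 0.5 * c * s ** 2
    ds = s[1] - s[0]
    x = x0 + np.concatenate([[0.0], np.cumsum(np.cos(theta[:-1]) * ds)])
    y = y0 + np.concatenate([[0.0], np.cumsum(np.sin(theta[:-1]) * ds)])
    return np.stack([x, y], axis=1)
```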
With further reference to fig. 5, as an implementation of the method shown in fig. 1 described above, the present disclosure discloses a lane information extraction system, an embodiment of which corresponds to the embodiment of the method shown in fig. 1. The system is particularly applicable to a variety of electronic devices.
As shown in fig. 5, the system of the present embodiment includes:
The image coordinate unit 501 acquires the ground lane marking sequence image data and the vehicle-mounted trajectory positioning data corresponding to its acquisition times, performs semantic segmentation on each frame of image data, and then computes the image vanishing point of the lane markings to obtain the coordinates of the lane markings in the image coordinate system;
the local lane unit 502 combines the vehicle-mounted trajectory positioning data and transforms the image vanishing point of the lane markings and the coordinates of the lane markings in the image coordinate system to obtain the positions of the local lane markings in the world coordinate system;
the global lane unit 503 clusters the positions of the local lane markings in the world coordinate system using covariance and distance calculations, selects a lane core point for each cluster, and fits curves through all lane core points to determine the curve between each two consecutive core points, thereby obtaining global lane data.
In some optional embodiments, the image coordinate unit 501 acquires the ground lane marking sequence image data and the vehicle-mounted trajectory positioning data corresponding to its acquisition times, performs semantic segmentation on each frame of image data, and then computes the image vanishing point of the lane markings to obtain the coordinates of the lane markings in the image coordinate system, by:
acquiring, for each frame in the autonomous-driving map imagery, the ground lane marking sequence image data together with the vehicle-mounted trajectory positioning data for the corresponding time, the data set at each time consisting of the trajectory positioning data at that time and the image data at that time; and performing semantic segmentation on the image data and, once the lane markings lie within the target range of the image coordinate system, computing the image vanishing point of the lane markings to obtain the coordinates of the lane markings in the image coordinate system. The quantities entering the vanishing point computation are: the normal vector of the great circle corresponding to each lane marking, the camera intrinsic parameters, the endpoints of the N-th lane marking, the direction of the vanishing point, the set of normal vectors of the Gaussian-sphere great circles corresponding to the lane markings extracted from each frame of lane marking image data (the normal vector corresponding to the N-th lane marking being one of its elements), and the image coordinates of the vanishing point of the lane markings, for the N-th lane marking.
In some optional embodiments, the local lane unit 502 combines the vehicle-mounted trajectory positioning data and transforms the image vanishing point of the lane markings and the coordinates of the lane markings in the image coordinate system to obtain the positions of the local lane markings in the world coordinate system, by: assuming the ground is a plane, computing the position of each local lane marking in the world coordinate system, expressed as the longitude, latitude and altitude of the lane marking in the world coordinate system, from: the fixed deviation between the camera and the on-board GNSS direction, the camera pitch angle, the coordinates of the lane marking in the image coordinate system, the longitude, latitude and altitude of the vehicle-mounted trajectory positioning data, the camera focal length, the ordinate of the extracted lane marking vanishing point in the image, the direction of the lane marking, the direction of the trajectory, the azimuth rotation matrix, the pitch-angle rotation matrix, the inverse of the set of normal vectors of the Gaussian-sphere great circles corresponding to the lane markings extracted from each frame of lane marking image data, and the total number of lane markings.
In some optional embodiments, the global lane unit 503 clusters the positions of the local lane markings in the world coordinate system using covariance and distance calculations, selects a lane core point for each cluster, and fits curves through all lane core points to determine the curve between each two consecutive core points to obtain global lane data, by: computing the lateral and longitudinal covariance of the positions of the local lane markings in the world coordinate system, the distance between the positions of the local lane markings in the world coordinate system, and the direction difference between the positions of the local lane markings in the world coordinate system, the quantities involved being: the distance between the positions of two local lane markings in the world coordinate system; the direction difference; the mean of the abscissas of the positions of the local lane markings in the world coordinate system; the longitudinal covariance of those positions; the ordinates of the positions; the mean of the ordinates; the lateral-longitudinal covariance matrix of the positions; the lateral covariance; the number of positions of local lane markings in the world coordinate system; and the directions of the positions of the two local lane markings in the world coordinate system; integrating the positions of all local lane markings in the world coordinate system and clustering all data using the distance and direction-difference functions;
and selecting a lane core point for each cluster and fitting curves through all lane core points to determine the curve between each two consecutive core points, thereby obtaining global lane data, the curve fitting equation being a clothoid whose tangential angle is expressed in terms of the arc length s between a point on the curve and the starting point, the coordinates of the previous lane core point, the coordinates of the next lane core point, the direction at the starting point of the curve, the curvature at the starting point of the curve, the rate of change of curvature, and the length of the curve segment.
Referring now to fig. 6, a schematic diagram of an electronic device suitable for use in implementing embodiments of the present disclosure is shown. The electronic device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), etc., a fixed terminal such as a digital TV, a desktop computer, etc. The electronic device shown in fig. 6 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 6, the electronic device may include a processing means (e.g., a central processor, a graphics processor, etc.) 901, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 902 or a program loaded from a storage means 908 into a Random Access Memory (RAM) 903. In the RAM 903, various programs and data required for the operation of the electronic device are also stored. The processing device 901, the ROM 902, and the RAM 903 are connected to each other through a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
In general, the following devices may be connected to the I/O interface 905: input devices 906 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 907 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 908 including, for example, magnetic tape, hard disk, etc.; and a communication device 909. Communication means 909 may allow the electronic device to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 shows an electronic device having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. The computer program can be downloaded and installed from the network through the communication means 909, or installed from the storage means 908, or installed from the ROM 902. When executed by the processing device 901, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire the crowdsourced ground lane marking sequence image information and the vehicle-mounted trajectory positioning data corresponding to its acquisition times, perform semantic segmentation on each frame of image, and then compute the image vanishing point of the lane markings to obtain the coordinates of the lane markings in the image coordinate system; combine the trajectory positioning data and transform the image vanishing point of the lane markings and the coordinates of the lane markings in the vehicle coordinate system to obtain the positions of the local lane markings in the world coordinate system; and cluster the positions of the local lane markings in the world coordinate system using covariance and distance calculations, select a lane core point for each cluster, and fit curves through all lane core points to determine the curve between each two consecutive core points, thereby obtaining global lane information.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, object oriented programming languages such as Java, Smalltalk, C++ and Python, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description covers only the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to the specific combinations of features described above, but also covers other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example embodiments in which the above features are interchanged with technical features of similar function disclosed herein (but not limited thereto).

Claims (10)

1. A lane information extraction method, characterized in that the method comprises:
acquiring ground lane marking sequence image data and the vehicle-mounted trajectory positioning data corresponding to its acquisition times, performing semantic segmentation on each frame of image data, and then computing the image vanishing point of the lane markings to obtain the coordinates of the lane markings in the image coordinate system;
combining the vehicle-mounted trajectory positioning data and transforming the image vanishing point of the lane markings and the coordinates of the lane markings in the image coordinate system to obtain the positions of the local lane markings in the world coordinate system; and
clustering the positions of the local lane markings in the world coordinate system using covariance and distance calculations, selecting a lane core point for each cluster, and fitting curves through all lane core points to determine the curve between each two consecutive core points, thereby obtaining global lane data.
2. The method according to claim 1, wherein acquiring the ground lane marking sequence image data and the vehicle-mounted trajectory positioning data corresponding to its acquisition times, performing semantic segmentation on each frame of image data, and then obtaining the coordinates of the lane markings in the image coordinate system by computing the image vanishing point of the lane markings, comprises: acquiring, for each frame in the autonomous-driving map imagery, the ground lane marking sequence image data together with the vehicle-mounted trajectory positioning data for the corresponding time, the data set at each time consisting of the trajectory positioning data at that time and the image data at that time; and performing semantic segmentation on the image data and, once the lane markings lie within the target range of the image coordinate system, computing the image vanishing point of the lane markings to obtain the coordinates of the lane markings in the image coordinate system, the quantities entering the vanishing point computation being: the normal vector of the great circle corresponding to each lane marking, the camera intrinsic parameters, the endpoints of the N-th lane marking, the direction of the vanishing point, the set of normal vectors of the Gaussian-sphere great circles corresponding to the lane markings extracted from each frame of lane marking image data (the normal vector corresponding to the N-th lane marking being one of its elements), and the image coordinates of the vanishing point of the lane markings, for the N-th lane marking.
3. The method according to claim 2, wherein combining the vehicle-mounted trajectory positioning data and transforming the image vanishing point of the lane markings and the coordinates of the lane markings in the image coordinate system to obtain the positions of the local lane markings in the world coordinate system comprises:
assuming the ground is a plane, computing the position of each local lane marking in the world coordinate system, expressed as the longitude, latitude and altitude of the lane marking in the world coordinate system, from: the fixed deviation between the camera and the on-board GNSS direction, the camera pitch angle, the coordinates of the lane marking in the image coordinate system, the longitude, latitude and altitude of the vehicle-mounted trajectory positioning data, the camera focal length, the ordinate of the extracted lane marking vanishing point in the image, the direction of the lane marking, the direction of the trajectory, the azimuth rotation matrix, the pitch-angle rotation matrix, the inverse of the set of normal vectors of the Gaussian-sphere great circles corresponding to the lane markings extracted from each frame of lane marking image data, and the total number of lane markings.
4. The method according to claim 3, wherein clustering the positions of the local lane markings in the world coordinate system through covariance and distance calculation, selecting a lane core point for each cluster, and performing curve fitting on all lane core points to determine the curve between two consecutive core points so as to obtain global lane data comprises:
computing the lateral and longitudinal covariance of the positions of the local lane markings in the world coordinate system, the distance between the positions of the local lane markings, and the direction difference between the positions of the local lane markings, wherein the lateral covariance, the longitudinal covariance and the lateral-longitudinal covariance matrix are computed from the abscissas and ordinates of the positions of the local lane markings, their means and the number of positions, the distance is computed between the positions of every two local lane markings, and the direction difference is the difference between the directions of the positions of the two local lane markings;
integrating the positions of all local lane markings in the world coordinate system and clustering all data through a distance and direction-difference function; selecting a lane core point for each cluster and performing curve fitting on all lane core points to determine the curve between two consecutive core points, so as to obtain global lane data, wherein the curve between the coordinates of the previous lane core point and the coordinates of the next lane core point is fitted by expressing the tangential angle as a function of the arc length s between a point on the curve and the starting point, in terms of the direction at the starting point, the curvature at the starting point, the rate of change of curvature and the length of the curve segment.
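A small sketch of these two stages follows. The greedy single-link clustering with distance and heading-difference thresholds, the threshold values, and the clothoid (tangential angle as a quadratic function of arc length) parameterization are assumptions chosen to match the quantities named in the claim; they are not the patented equations.

```python
import numpy as np

def cluster_markings(points, headings, d_max=1.5, a_max=np.radians(15)):
    """Greedy single-link clustering of local lane-marking positions.
    Two points join the same cluster when they are closer than d_max metres
    and their heading difference is below a_max (both thresholds assumed)."""
    labels = -np.ones(len(points), dtype=int)
    current = 0
    for i in range(len(points)):
        if labels[i] >= 0:
            continue
        labels[i] = current
        stack = [i]
        while stack:
            j = stack.pop()
            d = np.linalg.norm(points - points[j], axis=1)
            da = np.abs((headings - headings[j] + np.pi) % (2 * np.pi) - np.pi)
            for k in np.where((d < d_max) & (da < a_max) & (labels < 0))[0]:
                labels[k] = current
                stack.append(k)
        current += 1
    return labels

def clothoid_heading(s, theta0, kappa0, c):
    """Tangential angle of a clothoid: theta(s) = theta0 + kappa0*s + 0.5*c*s^2."""
    return theta0 + kappa0 * s + 0.5 * c * s**2

def clothoid_points(p0, theta0, kappa0, c, length, n=50):
    """Numerically integrate the clothoid from core point p0 over the given length."""
    s = np.linspace(0.0, length, n)
    theta = clothoid_heading(s, theta0, kappa0, c)
    x = p0[0] + np.concatenate(([0.0], np.cumsum(np.cos(theta[:-1]) * np.diff(s))))
    y = p0[1] + np.concatenate(([0.0], np.cumsum(np.sin(theta[:-1]) * np.diff(s))))
    return np.column_stack([x, y])
```

In this sketch, a core point per cluster (for example the cluster centroid) would be chosen from `labels`, and `clothoid_points` would be evaluated between each pair of consecutive core points to obtain a smooth global lane geometry.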
5. A lane information extraction system, the system comprising:
an image coordinate unit, configured to acquire the vehicle-mounted trajectory positioning data comprising the ground lane-marking sequence image data and the time corresponding thereto, perform semantic segmentation on each frame of image data, and then calculate the image vanishing point of the lane markings to obtain the coordinates of the lane markings in the image coordinate system;
a local lane unit, configured to combine the vehicle-mounted trajectory positioning data and transform the image vanishing point of the lane markings and the coordinates of the lane markings in the image coordinate system to obtain the positions of the local lane markings in the world coordinate system;
and a global lane unit, configured to cluster the positions of the local lane markings in the world coordinate system through covariance and distance calculation, select a lane core point for each cluster, and perform curve fitting on all lane core points to determine the curve between two consecutive core points, so as to obtain global lane data.
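The following is a hypothetical structural skeleton showing how the three claimed units could be organized as a processing pipeline; all class, attribute and method names are invented for illustration and the unit internals (segmentation model, projection, clustering) are deliberately left abstract.

```python
from dataclasses import dataclass
from typing import List, Tuple
import numpy as np

@dataclass
class Frame:
    time: float
    image: np.ndarray          # one frame of the lane-marking sequence imagery
    lon: float                 # vehicle-mounted trajectory positioning data
    lat: float
    height: float
    heading_deg: float

class ImageCoordinateUnit:
    """Segments each frame and returns lane-marking pixel coordinates
    plus the per-frame vanishing point (segmentation model not shown)."""
    def process(self, frame: Frame) -> Tuple[np.ndarray, np.ndarray]:
        raise NotImplementedError

class LocalLaneUnit:
    """Maps image coordinates to world coordinates using the trajectory
    data and the flat-ground assumption."""
    def process(self, frame: Frame, pixels: np.ndarray, vp: np.ndarray) -> np.ndarray:
        raise NotImplementedError

class GlobalLaneUnit:
    """Clusters local positions, picks core points and fits curves between them."""
    def process(self, local_positions: List[np.ndarray]) -> list:
        raise NotImplementedError

class LaneInformationExtractionSystem:
    def __init__(self) -> None:
        self.image_unit = ImageCoordinateUnit()
        self.local_unit = LocalLaneUnit()
        self.global_unit = GlobalLaneUnit()

    def run(self, frames: List[Frame]) -> list:
        local_positions = []
        for frame in frames:
            pixels, vp = self.image_unit.process(frame)
            local_positions.append(self.local_unit.process(frame, pixels, vp))
        return self.global_unit.process(local_positions)
```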
6. The system of claim 5, wherein the image coordinate unit acquiring the vehicle-mounted trajectory positioning data comprising the ground lane-marking sequence image data and the time corresponding thereto, performing semantic segmentation on each frame of image data, and then calculating the image vanishing point of the lane markings to obtain the coordinates of the lane markings in the image coordinate system comprises:
acquiring, for each frame of the autonomous-driving map imagery, the data set comprising the ground lane-marking sequence image data and the vehicle-mounted trajectory positioning data corresponding to its time, the data set at each time consisting of the trajectory positioning data and the image data at that time;
performing semantic segmentation on the image data and, once the lane markings fall within the target range of the image coordinate system, calculating the image vanishing point of the lane markings to obtain the coordinates of the lane markings in the image coordinate system, wherein the image vanishing point is determined from the camera intrinsic matrix, the endpoints of the lane markings and the normal vectors of the Gaussian-sphere great circles corresponding to the lane markings: the set of great-circle normal vectors, one normal vector per lane marking, is extracted from each frame of lane-marking image data; the vanishing-point direction is the direction lying on all of these great circles, i.e. orthogonal to every normal vector; and the image coordinates of the lane-marking vanishing point are obtained by projecting that direction through the camera intrinsic matrix.
7. The system of claim 6, wherein the local lane unit combining the vehicle-mounted trajectory positioning data and transforming the image vanishing point of the lane markings and the coordinates of the lane markings in the image coordinate system to obtain the positions of the local lane markings in the world coordinate system comprises: assuming the ground is a plane, computing the longitude, latitude and height of each local lane marking in the world coordinate system from: the coordinates of the lane marking in the image coordinate system; the longitude, latitude and height of the vehicle-mounted trajectory positioning data; the camera focal length; the ordinate of the extracted lane-marking vanishing point in the image; the direction of the lane marking and the direction of the trajectory; the azimuth rotation matrix and the pitch-angle rotation matrix; the fixed deviation between the camera and the vehicle-mounted GNSS direction; the camera pitch angle; the inverse of the set of Gaussian-sphere great-circle normal vectors extracted from each frame of lane-marking image data; and the total number of lane markings.
8. The system of claim 7, wherein the global lane unit clustering the positions of the local lane markings in the world coordinate system through covariance and distance calculation, selecting a lane core point for each cluster, and performing curve fitting on all lane core points to determine the curve between two consecutive core points so as to obtain global lane data comprises: computing the lateral and longitudinal covariance of the positions of the local lane markings in the world coordinate system, the distance between the positions of the local lane markings, and the direction difference between the positions of the local lane markings, wherein the lateral covariance, the longitudinal covariance and the lateral-longitudinal covariance matrix are computed from the abscissas and ordinates of the positions of the local lane markings, their means and the number of positions, the distance is computed between the positions of every two local lane markings, and the direction difference is the difference between the directions of the positions of the two local lane markings; integrating the positions of all local lane markings in the world coordinate system and clustering all data through a distance and direction-difference function; selecting a lane core point for each cluster and performing curve fitting on all lane core points to determine the curve between two consecutive core points, so as to obtain global lane data, wherein the curve between the coordinates of the previous lane core point and the coordinates of the next lane core point is fitted by expressing the tangential angle as a function of the arc length s between a point on the curve and the starting point, in terms of the direction at the starting point, the curvature at the starting point, the rate of change of curvature and the length of the curve segment.
9. An electronic device, comprising: one or more processors; and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1 to 4.
10. A computer-readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method according to any one of claims 1 to 4.
CN202410336833.9A 2024-03-22 2024-03-22 Lane information extraction method, system, electronic device and storage medium Active CN117928575B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410336833.9A CN117928575B (en) 2024-03-22 2024-03-22 Lane information extraction method, system, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410336833.9A CN117928575B (en) 2024-03-22 2024-03-22 Lane information extraction method, system, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN117928575A true CN117928575A (en) 2024-04-26
CN117928575B CN117928575B (en) 2024-06-18

Family

ID=90757807

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410336833.9A Active CN117928575B (en) 2024-03-22 2024-03-22 Lane information extraction method, system, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN117928575B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101645717B1 (en) * 2015-01-28 2016-08-12 (주)한양정보통신 Apparatus and method for adaptive calibration of advanced driver assistance system
US20210158567A1 (en) * 2018-06-05 2021-05-27 Beijing Sensetime Technology Development Co., Ltd. Visual positioning method and apparatus, electronic device, and system
CN109900254A (en) * 2019-03-28 2019-06-18 合肥工业大学 A kind of the road gradient calculation method and its computing device of monocular vision
CN110163930A (en) * 2019-05-27 2019-08-23 北京百度网讯科技有限公司 Lane line generation method, device, equipment, system and readable storage medium storing program for executing
CN110345952A (en) * 2019-07-09 2019-10-18 同济人工智能研究院(苏州)有限公司 A kind of serializing lane line map constructing method and building system
CN114140759A (en) * 2021-12-08 2022-03-04 阿波罗智能技术(北京)有限公司 High-precision map lane line position determining method and device and automatic driving vehicle
CN115100615A (en) * 2022-06-23 2022-09-23 浙江大学 End-to-end lane line detection method based on deep learning
CN115035138A (en) * 2022-08-10 2022-09-09 武汉御驾科技有限公司 Road surface gradient extraction method based on crowdsourcing data
CN115423879A (en) * 2022-08-31 2022-12-02 重庆长安汽车股份有限公司 Image acquisition equipment posture calibration method, device, equipment and storage medium
CN115265493A (en) * 2022-09-26 2022-11-01 四川省公路规划勘察设计研究院有限公司 Lane-level positioning method and device based on non-calibrated camera
CN115690138A (en) * 2022-10-18 2023-02-03 武汉大学 Road boundary extraction and vectorization method fusing vehicle-mounted image and point cloud

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
WEIRONG et al.: "Extraction of lane markings using orientation and vanishing point constraints in structured road scenes", International Journal of Computer Mathematics, vol. 91, no. 11, 1 August 2013 (2013-08-01), pages 2359-2373 *
Liu Yinbo: "Visual vanishing point detection and its application in lane line recognition", China Doctoral Dissertations Full-text Database, Engineering Science and Technology II, no. 2023, 15 October 2023 (2023-10-15), pages 035-1 *
Wang Xiaoyun: "Research on road detection algorithms in complex environments", China Master's Theses Full-text Database, Information Science and Technology, no. 2012, 15 October 2012 (2012-10-15), pages 138-2329 *

Also Published As

Publication number Publication date
CN117928575B (en) 2024-06-18

Similar Documents

Publication Publication Date Title
EP3109842B1 (en) Map-centric map matching method and apparatus
US11320836B2 (en) Algorithm and infrastructure for robust and efficient vehicle localization
KR102338270B1 (en) Method, apparatus, and computer readable storage medium for updating electronic map
US9222786B2 (en) Methods and systems for creating digital transportation networks
US20180225968A1 (en) Autonomous vehicle localization based on walsh kernel projection technique
CN110686686B (en) System and method for map matching
EP3663718A1 (en) Method and apparatus for estimating a localized position on a map
CN111465936B (en) System and method for determining new road on map
CN114993328B (en) Vehicle positioning evaluation method, device, equipment and computer readable medium
CN114459471B (en) Positioning information determining method and device, electronic equipment and storage medium
CN111275807A (en) 3D road modeling method and system
CN114549562A (en) UNet-based semi-automated oblique photography live-action model building singulation method, system, equipment and storage medium
CN110174892B (en) Vehicle orientation processing method, device, equipment and computer readable storage medium
CN117928575B (en) Lane information extraction method, system, electronic device and storage medium
CN114743395B (en) Signal lamp detection method, device, equipment and medium
US20220178701A1 (en) Systems and methods for positioning a target subject
CN115773744A (en) Model training and road network processing method, device, equipment, medium and product
CN115032672A (en) Fusion positioning method and system based on positioning subsystem
CN111210297B (en) Method and device for dividing boarding points
CN114063091A (en) High-precision positioning method and product
CN111383337A (en) Method and device for identifying objects
CN114526720B (en) Positioning processing method, device, equipment and storage medium
CN111461982B (en) Method and apparatus for splice point cloud
CN117606506A (en) Vehicle positioning method, device, electronic equipment and medium
Wang et al. Autonomous Driving-Oriented Cognitive Map Lane Generation and Location Recognition Algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant