CN110426051B - Lane line drawing method and device and storage medium


Info

Publication number
CN110426051B
CN110426051B (application CN201910718639.6A)
Authority
CN
China
Prior art keywords
lane line
lane
data
coordinate system
calculating
Prior art date
Legal status
Active
Application number
CN201910718639.6A
Other languages
Chinese (zh)
Other versions
CN110426051A (en)
Inventor
胡铮铭
白海江
杨贵
陶靖琦
刘奋
Current Assignee
Heading Data Intelligence Co Ltd
Original Assignee
Heading Data Intelligence Co Ltd
Priority date
Filing date
Publication date
Application filed by Heading Data Intelligence Co Ltd filed Critical Heading Data Intelligence Co Ltd
Priority to CN201910718639.6A
Publication of CN110426051A
Application granted
Publication of CN110426051B
Active legal status
Anticipated expiration

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G01C21/32 Structuring or formatting of map data
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 Determining position
    • G01S19/45 Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G01S19/47 Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)

Abstract

The invention relates to a lane line drawing method, a lane line drawing device and a storage medium, and belongs to the technical field of computers. The method comprises the following steps: collecting lane line data; extracting a lane line based on deep learning, and calculating three-dimensional scatter points of the lane line under a camera coordinate system; calculating the relative pose between the lane line images according to the inertia measurement data and the vehicle mileage data; according to the relative pose between the lane line images, splicing the three-dimensional scattered points to obtain a lane line, and associating the spliced lane line to a corresponding GPS position; and clustering the lane lines obtained by splicing, and fitting the clustered lane lines. By the scheme, the crowdsourcing data acquisition cost can be reduced on the premise of ensuring the drawing precision of the lane lines.

Description

Lane line drawing method and device and storage medium
Technical Field
The invention relates to the technical field of computers, in particular to a lane line drawing method, a lane line drawing device and a storage medium.
Background
In the field of automatic driving, a high-precision map is often required in order to control the vehicle accurately and to provide a reliable reference for vehicle trajectory planning. High-precision maps must be accurate to the lane line level, and mapping based on crowdsourced data places correspondingly high demands on data precision.
Drawing lane lines manually is not only inefficient but also error-prone. Alternatively, collecting a high-precision point cloud with a surveying vehicle and extracting the lane lines from the point cloud data can ensure lane line accuracy, but it places high demands on the collection equipment and on point cloud processing, making lane line drawing expensive.
Disclosure of Invention
In view of this, embodiments of the present invention provide a lane line drawing method, apparatus, and storage medium, which can reduce the lane line drawing cost.
In a first aspect of an embodiment of the present invention, a lane line drawing method is provided, including:
collecting lane line data, wherein the lane line data comprises lane line images, GPS data, inertia measurement data and mileage data;
detecting the lane line image based on deep learning, extracting a lane line, and calculating a three-dimensional scatter point of the lane line under a camera coordinate system;
calculating the relative pose between the lane line images according to the inertia measurement data and the mileage data;
according to the relative pose between the lane line images, splicing the three-dimensional scattered points to obtain a lane line, and associating the lane line obtained by splicing to a corresponding position in the GPS data;
and clustering the lane lines obtained by splicing, and fitting the clustered lane lines.
In a second aspect of the embodiments of the present invention, there is provided a lane line drawing device including:
the acquisition module is used for collecting lane line data, wherein the lane line data comprises lane line images, GPS data, inertia measurement data and mileage data;
the extraction module is used for detecting the lane line image based on deep learning to extract a lane line and calculating three-dimensional scatter points of the lane line under a camera coordinate system;
the calculation module is used for calculating the relative pose between the lane line images according to the inertia measurement data and the mileage data;
the splicing module is used for splicing the three-dimensional scattered points to obtain a lane line according to the relative pose between the lane line images and associating the lane line obtained by splicing to a corresponding position in the GPS data;
and the fitting module is used for clustering the lane lines obtained by splicing and fitting the clustered lane lines.
In a third aspect of the embodiments of the present invention, there is provided an apparatus, including a memory, a processor, and a computer program stored in the memory and executable by the processor, where the processor executes the computer program to implement the steps of the method according to the first aspect of the embodiments of the present invention.
In a fourth aspect of the embodiments of the present invention, a computer-readable storage medium is provided, which stores a computer program, which when executed by a processor implements the steps of the method provided by the first aspect of the embodiments of the present invention.
In a fifth aspect of embodiments of the present invention, a computer program product is provided, the computer program product comprising a computer program that, when executed by one or more processors, performs the steps of the method provided in the first aspect of embodiments of the present invention.
In the embodiment of the invention, after the lane line data are collected and the lane lines are extracted from the images, the three-dimensional scatter points are spliced into lane lines based on the relative pose between images, and the spliced lane lines are clustered and fitted to obtain the real lane lines. This greatly relaxes the precision requirements on the collected lane line data: with low-priced sensors, deep learning extraction, and pose calculation, the lane lines can still be obtained accurately. Compared with obtaining lane lines by direct clustering, the errors introduced by cheap sensors and GPS equipment can be calibrated out through the relative pose calculation, which guarantees both drawing accuracy and drawing efficiency, reduces data acquisition cost, and facilitates the processing of crowdsourced data for high-precision maps.
Drawings
Fig. 1 is a schematic flow chart of a lane line drawing method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a lane line drawing device according to an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a lane line drawing method, a lane line drawing device and a storage medium, which are used for reducing the lane line drawing cost in a high-precision map and reducing the data acquisition requirement.
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the embodiments described below are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The first embodiment is as follows:
referring to fig. 1, a schematic flow chart of a lane line drawing method according to an embodiment of the present invention includes:
s101, collecting lane line data, wherein the lane line data comprises lane line images, GPS data, inertia measurement data and mileage data;
the lane line image is a lane image shot by the vehicle-mounted camera, and the lane contains clear and distinguishable lane lines. The Inertial measurement data, that is, measurement data in an Inertial Measurement Unit (IMU), may obtain roll, pitch, and yaw data of the vehicle, and may facilitate extraction of the driving speed and acceleration of the vehicle, where the mileage data is vehicle body odometer measurement data. The lane line image, the GPS data, the inertia measurement data and the mileage data can be acquired by cheap equipment or sensors, and the requirement on the data measurement accuracy is not high.
Further, when the lane line image, the inertia measurement data, and the mileage data are collected, timestamps based on the GPS data acquisition time are added to them. The added timestamps not only associate the data streams with one another but also facilitate the subsequent time-ordered extraction and splicing of the lane lines.
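A minimal sketch of this timestamp association: each image, IMU, or odometer sample is tagged with the nearest GPS time so the streams can later be joined by time. The nearest_gps_time helper and the sample values are illustrative, not from the patent.

    import bisect

    def nearest_gps_time(gps_times, sample_time):
        """Return the GPS timestamp closest to a sensor sample's capture time.
        gps_times must be sorted in ascending order."""
        i = bisect.bisect_left(gps_times, sample_time)
        candidates = gps_times[max(i - 1, 0):i + 1]
        return min(candidates, key=lambda t: abs(t - sample_time))

    # Example: stamp an image captured at t = 12.34 s with the nearest GPS time.
    gps_times = [12.0, 12.3, 12.6]
    image_stamp = nearest_gps_time(gps_times, 12.34)  # -> 12.3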
S102, detecting the lane line image based on deep learning, extracting a lane line, and calculating a three-dimensional scatter point of the lane line under a camera coordinate system;
the deep learning based on the image characteristics can extract lane lines in the lane line image to obtain depth map information of the lane lines. The camera coordinate system is a three-dimensional coordinate system established by taking a camera focusing center as a coordinate origin optical axis as a z-axis, the depth information of the lane line is distributed to the space three-dimensional coordinate system, the lane line can be represented by three-dimensional scattered points, and the reconstruction of the lane line with low precision is facilitated.
Optionally, let (u, v) be an arbitrary coordinate point in the lane line image coordinate system, (u_0, v_0) the center coordinates of the lane line image, and (x_w, y_w, z_w) a three-dimensional coordinate point in the world coordinate system. Let z_c denote the z-axis value of the camera coordinates, i.e. the distance from the object to the camera, f the focal length of the camera, and dx and dy the dimensions of an image sensor element along the two coordinate axes of the lane line image coordinate system. The three-dimensional point coordinates in the world coordinate system are then:

x_w = z_c (u - u_0) dx / f
y_w = z_c (v - v_0) dy / f
z_w = z_c

where z_c is taken from the depth estimated by the deep learning model under the camera coordinate system, and the world coordinate system is defined to coincide with the camera coordinate system.
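A minimal sketch of the back-projection above; the intrinsic parameters (u_0, v_0, f, dx, dy) and the sample pixels and depths are illustrative assumptions, not values from the patent.

    import numpy as np

    def backproject(uv, z_c, u0, v0, f, dx, dy):
        """Map pixel coordinates (N, 2) plus estimated depths (N,) to 3-D
        scatter points in the camera (= world) coordinate system, per
        x_w = z_c(u-u0)dx/f, y_w = z_c(v-v0)dy/f, z_w = z_c."""
        u, v = uv[:, 0], uv[:, 1]
        x_w = z_c * (u - u0) * dx / f
        y_w = z_c * (v - v0) * dy / f
        return np.stack([x_w, y_w, z_c], axis=1)

    # Example: two lane line pixels with model-estimated depths, a 4 mm lens
    # and 3.45 um sensor elements (illustrative intrinsics).
    pts = backproject(np.array([[640.0, 400.0], [652.0, 420.0]]),
                      np.array([12.0, 10.5]),
                      u0=640.0, v0=360.0, f=0.004, dx=3.45e-6, dy=3.45e-6)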
Optionally, when the lane line image is collected by a binocular camera, matching a target lane line in images collected by the left eye camera and the right eye camera, calculating parallax between the left eye camera and the right eye camera based on ORB feature operator matching, and calculating the position of the lane line in a camera coordinate system according to the parallax. The depth information of the lane line can be acquired through the parallax of the left and right eye cameras.
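A sketch of the binocular case, assuming a rectified stereo pair so that the parallax of matched ORB features is a horizontal pixel offset; the focal length in pixels (f_px) and the baseline are illustrative parameters, not values from the patent.

    import cv2

    def stereo_lane_depths(left_img, right_img, f_px=800.0, baseline=0.12):
        """Depths (m) of matched lane line features in a rectified stereo pair."""
        orb = cv2.ORB_create()
        kp_l, des_l = orb.detectAndCompute(left_img, None)
        kp_r, des_r = orb.detectAndCompute(right_img, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        depths = []
        for m in matcher.match(des_l, des_r):
            # In rectified images the parallax is the horizontal pixel offset.
            d = kp_l[m.queryIdx].pt[0] - kp_r[m.trainIdx].pt[0]
            if d > 1e-3:  # skip degenerate or mismatched pairs
                depths.append(f_px * baseline / d)
        return depths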
S103, calculating the relative pose between the lane line images according to the inertia measurement data and the mileage data;
the lane line three-dimensional scattered point positions can be calibrated by calculating the relative pose, so that the lane lines can be spliced conveniently.
Specifically, a visual odometer is first established from the lane line images and the inertia measurement data. After the initial pose of a lane line image is calculated, the image pose is estimated by the visual odometer; the estimate is then used as the observation of an extended Kalman filter, the image pose is checked against the mileage data and the position of the lane line within the image, and the relative pose of the lane lines between images is calculated.
Illustratively, the rotation between lane line images may be represented by a 3 × 3 rotation matrix and the offset by a 3 × 1 vector; together, the rotation matrix and offset vector represent the relative pose.
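A minimal sketch of this representation, packing the 3 × 3 rotation and the 3 × 1 offset into a 4 × 4 homogeneous transform so relative poses can be composed and inverted; the function names are illustrative.

    import numpy as np

    def make_pose(R, t):
        """Pack a 3x3 rotation and a 3x1 offset into a 4x4 homogeneous transform."""
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = np.asarray(t).ravel()
        return T

    def relative_pose(T_a, T_b):
        """Pose of image b expressed in the frame of image a."""
        return np.linalg.inv(T_a) @ T_b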
S104, according to the relative position and posture between the lane line images, splicing the three-dimensional scattered points to obtain a lane line, and associating the lane line obtained by splicing to a corresponding position in the GPS data;
and (4) carrying out transformation splicing on the three-dimensional scattered points of the lane lines with different poses based on the relative poses between the images. The lane lines obtained by splicing are represented in a three-dimensional scattered point form or a three-dimensional scattered point connecting line, the number of the lane lines is multiple, and the lane lines or the three-dimensional scattered points of the lane lines are subjected to position association according to the corresponding relation between lane line images and GPS data acquisition time.
Preferably, a preset number of three-dimensional lane line scatter points with the best GPS signal and the highest confidence are selected from the acquired lane line images, and the pose conversion of these scatter points is carried out based on the relative pose between the lane line images. Selecting several scatter points with better GPS signals improves lane line accuracy.
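A sketch of the splicing step, under the assumption that each relative pose maps one image's frame into the previous image's frame; it chains the 4 × 4 transforms from the earlier sketch to express all scatter points in a common frame.

    import numpy as np

    def splice_scatter(scatter_per_image, relative_poses):
        """scatter_per_image: list of (N_i, 3) arrays, one per lane line image;
        relative_poses: 4x4 transform from each image's frame to the previous."""
        T = np.eye(4)
        merged = [scatter_per_image[0]]
        for pts, T_rel in zip(scatter_per_image[1:], relative_poses):
            T = T @ T_rel  # accumulated pose along the drive
            homog = np.c_[pts, np.ones(len(pts))]
            merged.append((homog @ T.T)[:, :3])  # points in the first frame
        return np.vstack(merged)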
And S105, clustering the spliced lane lines, and fitting the clustered lane lines.
Lane line clustering converts the spliced lane lines into a plane coordinate system and expresses their aggregation relation through two-dimensional lane line clusters; fitting then represents each cluster by a single lane line.
Specifically, the lane line clusters obtained by clustering are converted into a Cartesian coordinate system and segmented at a preset interval; according to the distribution of the cut points, the points corresponding to the peaks of that distribution are selected as lane line center points, and the lane line is fitted from the center points. Generally, the cut-point distribution follows a bivariate normal distribution, so the point at the peak of the distribution curve represents the center of the lane line position; connecting the center points of successive segments and smoothing the result yields the fitted lane line.
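A minimal sketch of this segmentation-and-fitting step; the 2 m cut interval and the cubic fit are illustrative choices, and the peak of each slice's (assumed normal) lateral distribution is taken at its mean.

    import numpy as np

    def fit_lane(cluster_xy, interval=2.0, deg=3):
        """cluster_xy: (N, 2) points of one lane line cluster, x along the road."""
        x, y = cluster_xy[:, 0], cluster_xy[:, 1]
        centers = []
        for x0 in np.arange(x.min(), x.max(), interval):
            in_slice = (x >= x0) & (x < x0 + interval)
            if in_slice.any():
                # For a normal distribution the peak coincides with the mean.
                centers.append((x[in_slice].mean(), y[in_slice].mean()))
        centers = np.asarray(centers)
        poly = np.poly1d(np.polyfit(centers[:, 0], centers[:, 1], deg))
        return centers, poly  # center points and the fitted lane line y(x)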
It should be noted that in this embodiment a lane line image contains lane lines; processing it yields the three-dimensional scatter points of those lane lines, and the lane line can be reconstructed by drawing the scatter points. The pose of a lane line image (or of a lane line) can be expressed through the pose of the collecting vehicle, which generally comprises a coordinate position, a speed, and a heading angle. Scatter points in the same coordinate system are transformed according to the different poses, which makes it convenient to splice the lane lines under a common pose; both the spliced and the fitted lane lines can be represented by three-dimensional scatter points.
The method of this embodiment algorithmically corrects the limited accuracy of raw crowdsourced data acquired with cheap equipment or sensors; it lowers the requirements on acquisition equipment, further reducing acquisition cost, while ensuring the accuracy of lane line drawing.
Example two:
fig. 2 is a schematic structural diagram of a lane line drawing device according to a second embodiment of the present invention, including:
the acquisition module 210 is configured to acquire lane line data, where the lane line data includes lane line images, GPS data, inertia measurement data, and mileage data;
optionally, the acquiring lane line data further includes:
and acquiring the lane line image, the inertia measurement data and the mileage data, and adding a timestamp based on the GPS data acquisition time to the lane line image, the inertia measurement data and the mileage data.
The extraction module 220 is configured to detect the lane line image based on deep learning, extract a lane line, and calculate a three-dimensional scatter point of the lane line in a camera coordinate system;
optionally, the calculating the three-dimensional scatter of the lane line in the camera coordinate system specifically includes:
let (u, v) be an arbitrary coordinate point in the lane line image coordinate system, (u_0, v_0) the center coordinates of the lane line image, and (x_w, y_w, z_w) a three-dimensional coordinate point in the world coordinate system; let z_c denote the z-axis value of the camera coordinates, i.e. the distance from the object to the camera, f the focal length of the camera, and dx and dy the dimensions of an image sensor element along the two coordinate axes of the lane line image coordinate system; the three-dimensional point coordinates in the world coordinate system are then:

x_w = z_c (u - u_0) dx / f
y_w = z_c (v - v_0) dy / f
z_w = z_c

wherein z_c is taken from the depth estimation result of the deep learning model under the camera coordinate system, and the world coordinate system is defined to coincide with the camera coordinate system.
Optionally, the detecting the lane line image based on the deep learning to extract the lane line, and calculating the three-dimensional scatter of the lane line in the camera coordinate system further includes:
and when the lane line image is acquired by a binocular camera, matching target lane lines in the images acquired by the left eye camera and the right eye camera, calculating the parallax between the left eye camera and the right eye camera based on ORB characteristic operator matching, and solving the position of the lane lines in a camera coordinate system according to the parallax.
A calculating module 230, configured to calculate a relative pose between the lane line images according to the inertial measurement data and the mileage data;
optionally, the calculating module 230 includes:
the establishing unit is used for establishing a visual odometer according to the lane line image and the inertia measurement data;
the estimation unit is used for estimating the lane line image pose based on the visual odometer after calculating the initial pose of the lane line image;
and the calculating unit is used for checking the position and the posture of the lane line image based on the mileage data and the positioning of the lane line in the lane line image and calculating the relative position and the posture of the lane line in the lane line image by taking the estimation result as an observed value of the extended Kalman filtering.
The splicing module 240 is configured to splice the three-dimensional scatter points to obtain a lane line according to the relative pose between the lane line images, and associate the lane line obtained by splicing with a corresponding position in the GPS data;
optionally, the stitching the three-dimensional scatter points to obtain the lane line according to the relative pose between the lane line images further includes:
selecting a preset number of GPS signals in the acquired lane line images and lane line three-dimensional scattered points corresponding to the highest confidence coefficient, and carrying out pose conversion on the lane line three-dimensional scattered points on the basis of the relative poses among the lane line images.
And the fitting module 250 is used for clustering the lane lines obtained by splicing and fitting the clustered lane lines.
Optionally, the fitting module 250 includes:
the selecting unit is used for converting lane line clusters obtained by lane line clustering into a Cartesian coordinate system, dividing the lane line clusters according to a preset distance, and selecting points corresponding to peak values of the distribution of the cutting points as lane line central points according to the distribution of the cutting points;
and the fitting unit is used for fitting the lane line according to the center point of the lane line.
The device of this embodiment reduces the cost of lane line data acquisition: lane line extraction, pose calculation, splicing, and fitting compensate algorithmically for the limited precision of the data, reducing equipment cost while maintaining efficiency.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented by a program instructing the associated hardware. The program may be stored in a computer-readable storage medium and, when executed, performs steps S101 to S105. The storage medium includes, for example: ROM/RAM, a magnetic disk, an optical disk, etc.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (9)

1. A lane line drawing method is characterized by comprising the following steps:
collecting lane line data, wherein the lane line data comprises lane line images, GPS data, inertia measurement data and mileage data;
detecting the lane line image based on deep learning, extracting a lane line, and calculating a three-dimensional scatter point of the lane line under a camera coordinate system;
calculating the relative pose between the lane line images according to the inertia measurement data and the mileage data;
wherein, the calculating the relative pose between the lane line images according to the inertia measurement data and the mileage data specifically comprises:
establishing a visual odometer according to the lane line image and the inertia measurement data;
after the initial pose of the lane line image is calculated, estimating the pose of the lane line image based on a visual odometer;
taking the estimation result as an observed value of extended Kalman filtering, checking the pose of the lane line image based on the mileage data and the positioning of the lane line in the lane line image, and calculating the relative pose of the lane line in the lane line image;
according to the relative pose between the lane line images, splicing the three-dimensional scattered points to obtain a lane line, and associating the lane line obtained by splicing to a position corresponding to the GPS data;
and clustering the lane lines obtained by splicing, and fitting the clustered lane lines.
2. The method of claim 1, wherein the collecting lane line data further comprises:
and acquiring the lane line image, the inertia measurement data and the mileage data, and adding a timestamp based on the GPS data acquisition time to the lane line image, the inertia measurement data and the mileage data.
3. The method according to claim 1, wherein the calculating the three-dimensional scatter of the lane line in the camera coordinate system is specifically:
let (u, v) be an arbitrary coordinate point in the lane line image coordinate system, (u_0, v_0) the center coordinates of the lane line image, and (x_w, y_w, z_w) a three-dimensional coordinate point in the world coordinate system; let z_c denote the z-axis value of the camera coordinates, i.e. the distance from the object to the camera, f the focal length of the camera, and dx and dy the dimensions of an image sensor element along the two coordinate axes of the lane line image coordinate system; the three-dimensional point coordinates in the world coordinate system are then:

x_w = z_c (u - u_0) dx / f
y_w = z_c (v - v_0) dy / f
z_w = z_c

wherein z_c is taken from the estimation result of the deep learning model under the camera coordinate system, and the world coordinate system is defined to coincide with the camera coordinate system.
4. The method according to claim 1 or 3, wherein the detecting the lane line image based on the deep learning to extract a lane line and calculating a three-dimensional scatter of the lane line in a camera coordinate system further comprises:
and when the lane line image is acquired by a binocular camera, matching target lane lines in the images acquired by the left eye camera and the right eye camera, calculating the parallax between the left eye camera and the right eye camera based on ORB characteristic operator matching, and solving the position of the lane lines in a camera coordinate system according to the parallax.
5. The method of claim 1, wherein the stitching the three-dimensional scatter points to obtain a lane line according to the relative pose between the lane line images further comprises:
and selecting a preset number of GPS signals in the lane line images and lane line three-dimensional scattered points corresponding to the highest confidence coefficient, and carrying out pose conversion on the lane line three-dimensional scattered points on the basis of the relative poses between the lane line images.
6. The method according to claim 1, wherein the clustering the lane lines obtained by splicing and the fitting of the clustered lane lines specifically comprise:
converting lane line clusters obtained by lane line clustering into a Cartesian coordinate system, segmenting the lane line clusters according to a preset distance, and selecting points corresponding to peak values of the segmentation point distribution as lane line central points according to the segmentation point distribution;
and fitting the lane line according to the center point of the lane line.
7. A lane line drawing device, comprising:
the acquisition module is used for collecting lane line data, wherein the lane line data comprises lane line images, GPS data, inertia measurement data and mileage data;
the extraction module is used for detecting the lane line image based on deep learning to extract a lane line and calculating three-dimensional scatter points of the lane line under a camera coordinate system;
the calculation module is used for calculating the relative pose between the lane line images according to the inertia measurement data and the mileage data;
wherein, the calculating the relative pose between the lane line images according to the inertia measurement data and the mileage data specifically comprises:
establishing a visual odometer according to the lane line image and the inertia measurement data;
after the initial pose of the lane line image is calculated, estimating the pose of the lane line image based on a visual odometer;
taking the estimation result as an observed value of extended Kalman filtering, checking the pose of the lane line image based on the mileage data and the positioning of the lane line in the lane line image, and calculating the relative pose of the lane line in the lane line image;
the splicing module is used for splicing the three-dimensional scattered points to obtain a lane line according to the relative pose between the lane line images and associating the lane line obtained by splicing to a corresponding position in the GPS data;
and the fitting module is used for clustering the lane lines obtained by splicing and fitting the clustered lane lines.
8. An apparatus for lane line drawing comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor when executing the computer program implements the steps of the lane line drawing method of any of claims 1 to 6.
9. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the lane line drawing method according to any one of claims 1 to 6.
CN201910718639.6A 2019-08-05 2019-08-05 Lane line drawing method and device and storage medium Active CN110426051B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910718639.6A CN110426051B (en) 2019-08-05 2019-08-05 Lane line drawing method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910718639.6A CN110426051B (en) 2019-08-05 2019-08-05 Lane line drawing method and device and storage medium

Publications (2)

Publication Number Publication Date
CN110426051A CN110426051A (en) 2019-11-08
CN110426051B 2021-05-18

Family

ID=68412689

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910718639.6A Active CN110426051B (en) 2019-08-05 2019-08-05 Lane line drawing method and device and storage medium

Country Status (1)

Country Link
CN (1) CN110426051B (en)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018176000A1 (en) 2017-03-23 2018-09-27 DeepScale, Inc. Data synthesis for autonomous control systems
US10671349B2 (en) 2017-07-24 2020-06-02 Tesla, Inc. Accelerated mathematical engine
US11409692B2 (en) 2017-07-24 2022-08-09 Tesla, Inc. Vector computational unit
US11157441B2 (en) 2017-07-24 2021-10-26 Tesla, Inc. Computational array microprocessor system using non-consecutive data formatting
US11893393B2 (en) 2017-07-24 2024-02-06 Tesla, Inc. Computational array microprocessor system with hardware arbiter managing memory requests
US11561791B2 (en) 2018-02-01 2023-01-24 Tesla, Inc. Vector computational unit receiving data elements in parallel from a last row of a computational array
US11215999B2 (en) 2018-06-20 2022-01-04 Tesla, Inc. Data pipeline and deep learning system for autonomous driving
US11361457B2 (en) 2018-07-20 2022-06-14 Tesla, Inc. Annotation cross-labeling for autonomous control systems
US11636333B2 (en) 2018-07-26 2023-04-25 Tesla, Inc. Optimizing neural network structures for embedded systems
US11562231B2 (en) 2018-09-03 2023-01-24 Tesla, Inc. Neural networks for embedded devices
AU2019357615B2 (en) 2018-10-11 2023-09-14 Tesla, Inc. Systems and methods for training machine models with augmented data
US11196678B2 (en) 2018-10-25 2021-12-07 Tesla, Inc. QOS manager for system on a chip communications
US11816585B2 (en) 2018-12-03 2023-11-14 Tesla, Inc. Machine learning models operating at different frequencies for autonomous vehicles
US11537811B2 (en) 2018-12-04 2022-12-27 Tesla, Inc. Enhanced object detection for autonomous vehicles based on field view
US11610117B2 (en) 2018-12-27 2023-03-21 Tesla, Inc. System and method for adapting a neural network model on a hardware platform
US10997461B2 (en) 2019-02-01 2021-05-04 Tesla, Inc. Generating ground truth for machine learning from time series elements
US11567514B2 (en) 2019-02-11 2023-01-31 Tesla, Inc. Autonomous and user controlled vehicle summon to a target
US10956755B2 (en) 2019-02-19 2021-03-23 Tesla, Inc. Estimating object properties using visual image data
CN111209805B (en) * 2019-12-24 2022-05-31 武汉中海庭数据技术有限公司 Rapid fusion optimization method for multi-channel segment data of lane line crowdsourcing data
CN111199567B (en) * 2020-01-06 2023-09-12 河北科技大学 Lane line drawing method and device and terminal equipment
CN111127551A (en) * 2020-03-26 2020-05-08 北京三快在线科技有限公司 Target detection method and device
CN112050821B (en) * 2020-09-11 2021-08-20 湖北亿咖通科技有限公司 Lane line polymerization method
CN114252082B (en) * 2022-03-01 2022-05-17 苏州挚途科技有限公司 Vehicle positioning method and device and electronic equipment
CN116129389B (en) * 2023-03-27 2023-07-21 浙江零跑科技股份有限公司 Lane line acquisition method, computer equipment, readable storage medium and motor vehicle

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006153565A (en) * 2004-11-26 2006-06-15 Nissan Motor Co Ltd In-vehicle navigation device and own car position correction method
CN104766058A (en) * 2015-03-31 2015-07-08 百度在线网络技术(北京)有限公司 Method and device for obtaining lane line
CN104865578A (en) * 2015-05-12 2015-08-26 上海交通大学 Indoor parking lot high-precision map generation device and method
CN108256446A (en) * 2017-12-29 2018-07-06 百度在线网络技术(北京)有限公司 For determining the method, apparatus of the lane line in road and equipment
CN108470159A (en) * 2018-03-09 2018-08-31 腾讯科技(深圳)有限公司 Lane line data processing method, device, computer equipment and storage medium
CN108489482A (en) * 2018-02-13 2018-09-04 视辰信息科技(上海)有限公司 The realization method and system of vision inertia odometer
CN108960183A (en) * 2018-07-19 2018-12-07 北京航空航天大学 A kind of bend target identification system and method based on Multi-sensor Fusion
CN109460739A (en) * 2018-11-13 2019-03-12 广州小鹏汽车科技有限公司 Method for detecting lane lines and device
CN109902637A (en) * 2019-03-05 2019-06-18 长沙智能驾驶研究院有限公司 Method for detecting lane lines, device, computer equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10373003B2 (en) * 2017-08-22 2019-08-06 TuSimple Deep module and fitting module system and method for motion-based lane detection with multiple sensors


Also Published As

Publication number Publication date
CN110426051A (en) 2019-11-08


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant