CN110426051A - Lane line drawing method, device and storage medium - Google Patents
Lane line drawing method, device and storage medium
- Publication number
- CN110426051A (application number CN201910718639.6A)
- Authority
- CN
- China
- Prior art keywords
- lane line
- line image
- image
- lane
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/28—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
- G01C21/30—Map- or contour-matching
- G01C21/32—Structuring or formatting of map data
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S19/00—Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
- G01S19/38—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
- G01S19/39—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
- G01S19/42—Determining position
- G01S19/45—Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S19/00—Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
- G01S19/38—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
- G01S19/39—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
- G01S19/42—Determining position
- G01S19/45—Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
- G01S19/47—Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial
Abstract
The present invention relates to a lane line drawing method, a device, and a storage medium, and belongs to the field of computer technology. The method comprises: acquiring lane line data; extracting lane lines based on deep learning and computing the three-dimensional scatter points of the lane lines in the camera coordinate system; computing the relative pose between lane line images according to inertial measurement data and vehicle odometry data; stitching the three-dimensional scatter points into lane lines according to the relative pose between the lane line images, and associating the stitched lane lines with the corresponding GPS positions; clustering the stitched lane lines and fitting the clustered lane lines. This solution reduces the cost of crowdsourced data acquisition while preserving lane line drawing accuracy.
Description
Technical field
The present invention relates to the field of computer technology, and in particular to a lane line drawing method, a device, and a storage medium.
Background technique
In the field of autonomous driving, precisely controlling a vehicle and providing a reliable reference for trajectory planning usually requires a high-precision map. High-precision mapping must be accurate to the lane line level, so drawing maps from crowdsourced data demands high data accuracy.
Drawing lane lines manually is not only inefficient but also error-prone. Alternatively, a surveying vehicle can collect a high-precision point cloud, and extracting lane lines from such point cloud data does guarantee accuracy, but it places high demands on the acquisition equipment and on point cloud processing, making lane line drawing expensive.
Summary of the invention
In view of this, embodiments of the present invention provide a lane line drawing method, a device, and a storage medium that can reduce the cost of drawing lane lines.
In a first aspect of the embodiments of the present invention, a lane line drawing method is provided, comprising:
acquiring lane line data, the lane line data including lane line images, GPS data, inertial measurement data, and odometry data;
detecting the lane line images based on deep learning to extract lane lines, and computing the three-dimensional scatter points of the lane lines in the camera coordinate system;
computing the relative pose between the lane line images according to the inertial measurement data and the odometry data;
stitching the three-dimensional scatter points into lane lines according to the relative pose between the lane line images, and associating the stitched lane lines with the corresponding positions in the GPS data;
clustering the stitched lane lines, and fitting the clustered lane lines.
In a second aspect of the embodiments of the present invention, a lane line drawing apparatus is provided, comprising:
an acquisition module for acquiring lane line data, the lane line data including lane line images, GPS data, inertial measurement data, and odometry data;
an extraction module for detecting the lane line images based on deep learning to extract lane lines, and computing the three-dimensional scatter points of the lane lines in the camera coordinate system;
a computing module for computing the relative pose between the lane line images according to the inertial measurement data and the odometry data;
a stitching module for stitching the three-dimensional scatter points into lane lines according to the relative pose between the lane line images, and associating the stitched lane lines with the corresponding positions in the GPS data;
a fitting module for clustering the stitched lane lines and fitting the clustered lane lines.
In a third aspect of the embodiments of the present invention, a device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method of the first aspect.
In a fourth aspect of the embodiments of the present invention, a computer-readable storage medium is provided, the computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method provided by the first aspect.
In a fifth aspect of the embodiments of the present invention, a computer program product is provided, the computer program product including a computer program which, when executed by one or more processors, implements the steps of the method provided by the first aspect.
In the embodiments of the present invention, lane line data is acquired, lane lines are extracted from the images, the three-dimensional scatter points are stitched into lane lines based on the relative pose between images, and the stitched lane lines are clustered and fitted to obtain the true lane lines. This greatly relaxes the accuracy requirements on the acquired lane line data: lane lines extracted by deep learning from data captured by low-cost sensors can still be located accurately through pose computation. Compared with clustering the raw data directly, computing the relative pose calibrates away the errors introduced by low-cost sensors and GPS devices, ensuring accurate lane line drawing while maintaining drawing efficiency, reducing data acquisition cost, and simplifying the processing of crowdsourced data for high-precision maps.
Detailed description of the invention
Fig. 1 is a schematic flowchart of a lane line drawing method provided by an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of a lane line drawing apparatus provided by an embodiment of the present invention.
Specific embodiment
Embodiments of the present invention provide a lane line drawing method, a device, and a storage medium for reducing the cost of drawing lane lines in high-precision maps and relaxing data acquisition requirements.
To make the purpose, features, and advantages of the present invention clearer and easier to understand, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the embodiments described below are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Embodiment one:
Referring to Fig. 1, the schematic flowchart of the lane line drawing method provided by an embodiment of the present invention comprises:
S101. Acquire lane line data, the lane line data including lane line images, GPS data, inertial measurement data, and odometry data.
A lane line image is a road image captured by an in-vehicle camera and contains clearly legible lane lines. The inertial measurement data is the output of an IMU (Inertial Measurement Unit), which provides the vehicle's roll, pitch, and yaw and makes it easy to derive the vehicle's speed and acceleration. The odometry data is the measurement of the vehicle's wheel odometer. The lane line images, GPS data, inertial measurement data, and odometry data can all be acquired by low-cost equipment or sensors; no high measurement accuracy is required.
Further, when acquiring the lane line images, the inertial measurement data, and the odometry data, a timestamp based on the GPS data acquisition time is added to each of them. Timestamping not only associates the data streams but also facilitates the subsequent time-ordered extraction and stitching of lane lines.
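The timestamp association described above can be sketched as follows. This is an illustrative Python sketch, not part of the patent: the record structure and the nearest-timestamp matching are assumptions about how GPS-stamped streams might be aligned.

```python
from bisect import bisect_left
from dataclasses import dataclass

@dataclass
class StampedRecord:
    t: float          # timestamp derived from the GPS clock, in seconds
    payload: object   # image frame, IMU sample, or odometer reading

def nearest_record(records, t):
    """Return the record whose GPS-based timestamp is closest to t.

    `records` must be sorted by t; this mirrors the time-based
    association of images, IMU data and odometry described above.
    """
    times = [r.t for r in records]
    i = bisect_left(times, t)
    candidates = records[max(0, i - 1):i + 1]
    return min(candidates, key=lambda r: abs(r.t - t))

# hypothetical IMU stream, stamped against the GPS clock
imu = [StampedRecord(0.00, "imu0"), StampedRecord(0.01, "imu1"),
       StampedRecord(0.02, "imu2")]
print(nearest_record(imu, 0.012).payload)  # imu1
```

In practice each camera frame would be paired with the nearest IMU and odometer samples this way before pose computation.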
S102. Detect the lane line images based on deep learning to extract lane lines, and compute the three-dimensional scatter points of the lane lines in the camera coordinate system.
Deep learning extracts the lane lines in a lane line image by learning image features, yielding depth map information for the lane lines. The camera coordinate system is a three-dimensional coordinate system whose origin is at the camera's focusing center and whose z-axis is the optical axis. Distributing the depth information of the lane lines over this three-dimensional coordinate system represents each lane line as three-dimensional scatter points, which makes it possible to reconstruct lane lines from low-accuracy data.
Optionally, let u, v be an arbitrary coordinate point in the lane line image coordinate system, u0, v0 the center coordinates of the lane line image, xw, yw, zw a three-dimensional point in the world coordinate system, zc the z-axis value in camera coordinates (that is, the distance from the target to the camera), f the focal length of the camera, and dx, dy the sizes of a sensor pixel along the two coordinate axes of the image coordinate system. The three-dimensional point in the world coordinate system is then:
xw=zc(u-u0)dx/f
yw=zc(v-v0)dy/f
zw=zc
where zc is the depth estimated by the deep learning model. The world coordinate system is defined in the camera coordinate system, so the world coordinate system and the camera coordinate system coincide.
Optionally, when the lane line images are collected by a binocular camera, the target lane lines in the images collected by the left camera and the right camera are matched based on ORB feature descriptors, the disparity between the left camera and the right camera is computed, and the position of the lane line in the camera coordinate system is derived from the disparity. The disparity between the left and right cameras yields the depth information of the lane line.
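The disparity-to-depth relation implied here is the standard rectified-stereo formula z = f·B/d, with f in pixels and B the camera baseline. A minimal sketch with illustrative numbers (the patent does not give specific camera parameters):

```python
def depth_from_disparity(fx_pixels, baseline_m, disparity_pixels):
    """Classic rectified-stereo relation: z = f * B / d.

    fx_pixels: focal length expressed in pixels.
    baseline_m: distance between the left and right camera centres.
    disparity_pixels: horizontal offset of the matched lane point.
    """
    if disparity_pixels <= 0:
        raise ValueError("matched point must have positive disparity")
    return fx_pixels * baseline_m / disparity_pixels

# 500 px focal length, 12 cm baseline, 25 px disparity on a lane point
d = depth_from_disparity(500.0, 0.12, 25.0)  # ~2.4 m ahead
```

Small disparities correspond to large depths, which is why distant lane points carry larger depth uncertainty.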
S103. Compute the relative pose between the lane line images according to the inertial measurement data and the odometry data.
Because the vehicle's speed, heading angle, and so on differ between shots, the lane line scatter points mapped into the camera coordinate system carry angular and positional offsets. Computing the relative pose calibrates the positions of the three-dimensional lane line scatter points and facilitates lane stitching.
Specifically, after the initial pose of a lane line image is computed, the pose of the lane line image is estimated with a visual odometer. The estimate is used as the observation of an extended Kalman filter; the lane line image pose is corrected based on the odometry data and the lane line positions in the images, and the relative pose of the lane lines between images is computed.
Illustratively, a lane line pose can be represented by a 3x3 rotation matrix, with a 3x1 vector for the offset; together they express the relative pose.
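The rotation-plus-translation representation composes between frames as follows. A sketch assuming numpy and a world-from-body pose convention, which the patent does not specify:

```python
import numpy as np

def relative_pose(Ra, ta, Rb, tb):
    """Relative pose of frame b expressed in frame a.

    Each absolute pose is (R, t) with R a 3x3 rotation and t a 3-vector,
    matching the text's representation. Then:
        R_ab = Ra^T Rb,   t_ab = Ra^T (tb - ta)
    """
    R_ab = Ra.T @ Rb
    t_ab = Ra.T @ (tb - ta)
    return R_ab, t_ab

# frame a at the origin; frame b translated 1 m forward, same heading
I = np.eye(3)
R_ab, t_ab = relative_pose(I, np.zeros(3), I, np.array([0.0, 0.0, 1.0]))
print(t_ab)  # [0. 0. 1.]
```

In the method, (Ra, ta) and (Rb, tb) would come from the EKF-corrected image poses, and (R_ab, t_ab) is what the stitching step consumes.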
S104. Stitch the three-dimensional scatter points into lane lines according to the relative pose between the lane line images, and associate the stitched lane lines with the corresponding positions in the GPS data.
The three-dimensional lane line scatter points captured at different poses are transformed and stitched based on the relative pose between images. The stitched lane lines are represented as three-dimensional scatter points or scatter polylines; there are multiple lane lines, and each lane line (or its scatter points) is associated with a position according to the correspondence between the lane line image and the GPS data acquisition time.
Preferably, a preset number of three-dimensional lane line scatter sets corresponding to the lane line images with the highest GPS signal confidence are chosen, and pose transformations are applied to them based on the relative pose between the lane line images. Selecting the scatter points with good GPS signals improves lane line accuracy.
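Stitching by relative pose amounts to a rigid transform of each frame's scatter points into a common frame followed by concatenation. A minimal numpy sketch under the same (R, t) convention as above; frame contents here are invented for illustration:

```python
import numpy as np

def stitch(scatter_sets, poses):
    """Transform each frame's lane-line scatter points into the
    reference frame using its relative pose (R, t), then concatenate.

    scatter_sets: list of (N_i, 3) arrays in their own camera frames.
    poses: list of (R, t) mapping frame i points into the reference frame.
    """
    stitched = [pts @ R.T + t for pts, (R, t) in zip(scatter_sets, poses)]
    return np.vstack(stitched)

frame0 = np.array([[0.0, 0.0, 5.0]])   # lane point 5 m ahead, frame 0
frame1 = np.array([[0.0, 0.0, 5.0]])   # another point, seen 1 m later
poses = [(np.eye(3), np.zeros(3)),
         (np.eye(3), np.array([0.0, 0.0, 1.0]))]
print(stitch([frame0, frame1], poses))
# [[0. 0. 5.]
#  [0. 0. 6.]]
```

The stitched array is what gets anchored to GPS positions and later clustered.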
S105. Cluster the stitched lane lines, and fit the clustered lane lines.
Clustering converts the stitched lane lines into a plane coordinate system and represents the grouping of lane lines as two-dimensional lane line clusters. Fitting characterizes each cluster by a single lane line.
Specifically, the lane line clusters are transformed into a Cartesian coordinate system and split into segments of a preset length. Within each segment, the point corresponding to the peak of the segment's point distribution is chosen as a lane line center point, and the lane line is fitted through the center points. In general, the point distribution in a segment follows a bivariate normal distribution, and the point at the peak of the distribution indicates the center of the lane line; smoothing the polyline through the center points yields the fitted lane line.
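The segment-then-peak-then-fit procedure can be sketched as follows. Illustrative only: a coarse histogram stands in for the bivariate-normal peak, and a low-degree polynomial for the smoothed center line the text describes.

```python
import numpy as np

def fit_lane(points, segment_len=1.0, deg=2):
    """Fit one lane line from clustered lane points, following the text:
    split the points into segments of a preset length along the driving
    direction, take the peak of each segment's lateral distribution as
    the segment's centre point, then fit a curve through the centres.

    points: (N, 2) array of (x, y) points in a Cartesian frame, with y
    roughly along the driving direction. Returns coefficients of
    x = poly(y), highest degree first (numpy.polyfit convention).
    """
    y = points[:, 1]
    centres = []
    for lo in np.arange(y.min(), y.max() + segment_len, segment_len):
        seg = points[(y >= lo) & (y < lo + segment_len)]
        if len(seg) < 2:
            continue
        # coarse histogram peak stands in for the distribution's mode
        hist, edges = np.histogram(seg[:, 0], bins=10)
        k = int(np.argmax(hist))
        centres.append((0.5 * (edges[k] + edges[k + 1]), seg[:, 1].mean()))
    centres = np.asarray(centres)
    return np.polyfit(centres[:, 1], centres[:, 0], deg)
```

Applied to noisy points scattered around x = 0.5·y, the fitted polynomial recovers roughly that line.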
It should be noted that, in this embodiment, the lane line images containing lane lines are processed to obtain three-dimensional lane line scatter points, from which the lane lines can be reconstructed and drawn. The pose of a lane line image or lane line can be expressed by the pose of the acquisition vehicle, generally including coordinate position, speed, and heading angle. According to the pose differences, the scatter points are transformed into a common coordinate system so that lane lines can be stitched under the same pose; both the stitched and the fitted lane lines can be characterized by three-dimensional scatter points.
The method provided by this embodiment corrects, by algorithm, the low accuracy of raw crowdsourced data acquired by low-cost equipment or sensors. It lowers the requirements on acquisition equipment, further reduces acquisition cost, and still guarantees accurate lane line drawing.
Embodiment two:
Fig. 2 is a schematic structural diagram of the lane line drawing apparatus provided by Embodiment 2 of the present invention, comprising:
an acquisition module 210 for acquiring lane line data, the lane line data including lane line images, GPS data, inertial measurement data, and odometry data.
Optionally, acquiring the lane line data further includes:
acquiring the lane line images, the inertial measurement data, and the odometry data, and adding to the lane line images, the inertial measurement data, and the odometry data a timestamp based on the GPS data acquisition time.
An extraction module 220 for detecting the lane line images based on deep learning to extract lane lines, and computing the three-dimensional scatter points of the lane lines in the camera coordinate system.
Optionally, computing the three-dimensional scatter points of the lane lines in the camera coordinate system is specifically:
letting u, v be an arbitrary coordinate point in the lane line image coordinate system, u0, v0 the center coordinates of the lane line image, xw, yw, zw a three-dimensional point in the world coordinate system, zc the z-axis value in camera coordinates (i.e., the distance from the target to the camera), f the focal length of the camera, and dx, dy the sizes of a sensor pixel along the two coordinate axes of the image coordinate system, the three-dimensional point in the world coordinate system is:
xw=zc(u-u0)dx/f
yw=zc(v-v0)dy/f
zw=zc
where zc is the depth estimated by the deep learning model; the world coordinate system is defined in the camera coordinate system, so the two coordinate systems coincide.
Optionally, detecting the lane line images based on deep learning to extract lane lines and computing the three-dimensional scatter points of the lane lines in the camera coordinate system further includes:
when the lane line images are collected by a binocular camera, matching the target lane lines in the images collected by the left and right cameras based on ORB feature descriptors, computing the disparity between the left and right cameras, and deriving the position of the lane line in the camera coordinate system from the disparity.
A computing module 230 for computing the relative pose between the lane line images according to the inertial measurement data and the odometry data.
Optionally, the computing module 230 includes:
an establishing unit for establishing a visual odometer from the lane line images and the inertial measurement data;
an estimation unit for estimating the pose of a lane line image with the visual odometer after its initial pose is computed;
a computing unit for using the estimate as the observation of an extended Kalman filter, correcting the lane line image pose based on the odometry data and the lane line positions in the lane line images, and computing the relative pose of the lane lines between images.
A stitching module 240 for stitching the three-dimensional scatter points into lane lines according to the relative pose between the lane line images, and associating the stitched lane lines with the corresponding positions in the GPS data.
Optionally, stitching the three-dimensional scatter points into lane lines according to the relative pose between the lane line images further includes:
choosing a preset number of three-dimensional lane line scatter sets corresponding to the lane line images with the highest GPS signal confidence, and applying pose transformations to the three-dimensional lane line scatter points based on the relative pose between the lane line images.
A fitting module 250 for clustering the stitched lane lines and fitting the clustered lane lines.
Optionally, the fitting module 250 includes:
a selection unit for transforming the lane line clusters into a Cartesian coordinate system, splitting them into segments of a preset length, and, within each segment, choosing the point corresponding to the peak of the point distribution as a lane line center point;
a fitting unit for fitting the lane line through the center points.
The apparatus of this embodiment reduces the cost of lane line data acquisition: through lane line extraction, pose computation, stitching, and fitting, the algorithm compensates for the limited accuracy of the data while lowering equipment cost and maintaining efficiency.
In the above embodiments, each embodiment is described with its own emphasis; for parts not detailed in one embodiment, refer to the related descriptions of the other embodiments.
A person of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments can be completed by instructing relevant hardware through a program. The program can be stored in a computer-readable storage medium and, when executed, includes steps S101 to S105. The storage medium includes, for example, ROM/RAM, magnetic disks, and optical discs.
The above embodiments are intended only to illustrate, not to limit, the technical solutions of the present invention. Although the invention has been explained in detail with reference to the foregoing embodiments, a person skilled in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of their technical features replaced by equivalents, without departing from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (10)
1. A lane line drawing method, characterized by comprising:
acquiring lane line data, the lane line data including lane line images, GPS data, inertial measurement data, and odometry data;
detecting the lane line images based on deep learning to extract lane lines, and computing the three-dimensional scatter points of the lane lines in the camera coordinate system;
computing the relative pose between the lane line images according to the inertial measurement data and the odometry data;
stitching the three-dimensional scatter points into lane lines according to the relative pose between the lane line images, and associating the stitched lane lines with the corresponding positions in the GPS data;
clustering the stitched lane lines, and fitting the clustered lane lines.
2. The method according to claim 1, characterized in that acquiring the lane line data further comprises:
acquiring the lane line images, the inertial measurement data, and the odometry data, and adding to the lane line images, the inertial measurement data, and the odometry data a timestamp based on the GPS data acquisition time.
3. The method according to claim 1, characterized in that computing the three-dimensional scatter points of the lane lines in the camera coordinate system is specifically:
letting u, v be an arbitrary coordinate point in the lane line image coordinate system, u0, v0 the center coordinates of the lane line image, xw, yw, zw a three-dimensional point in the world coordinate system, zc the z-axis value in camera coordinates, i.e., the distance from the target to the camera, f the focal length of the camera, and dx, dy the sizes of a sensor pixel along the two coordinate axes of the image coordinate system, the three-dimensional point in the world coordinate system is:
xw=zc(u-u0)dx/f
yw=zc(v-v0)dy/f
zw=zc
where zc is the depth estimated by the deep learning model, the world coordinate system is defined in the camera coordinate system, and the world coordinate system and the camera coordinate system coincide.
4. The method according to claim 1 or 3, characterized in that detecting the lane line images based on deep learning to extract lane lines and computing the three-dimensional scatter points of the lane lines in the camera coordinate system further comprises:
when the lane line images are collected by a binocular camera, matching the target lane lines in the images collected by the left camera and the right camera based on ORB feature descriptors, computing the disparity between the left camera and the right camera, and deriving the position of the lane line in the camera coordinate system from the disparity.
5. The method according to claim 1, characterized in that computing the relative pose between the lane line images according to the inertial measurement data and the odometry data is specifically:
establishing a visual odometer from the lane line images and the inertial measurement data;
after computing the initial pose of a lane line image, estimating the pose of the lane line image with the visual odometer;
using the estimate as the observation of an extended Kalman filter, correcting the lane line image pose based on the odometry data and the lane line positions in the lane line images, and computing the relative pose of the lane lines between images.
6. The method according to claim 1, characterized in that stitching the three-dimensional scatter points into lane lines according to the relative pose between the lane line images further comprises:
choosing a preset number of three-dimensional lane line scatter sets corresponding to the lane line images with the highest GPS signal confidence, and applying pose transformations to the three-dimensional lane line scatter points based on the relative pose between the lane line images.
7. The method according to claim 1, characterized in that clustering the stitched lane lines and fitting the clustered lane lines is specifically:
transforming the lane line clusters into a Cartesian coordinate system, splitting the clusters into segments of a preset length, and, within each segment, choosing the point corresponding to the peak of the point distribution as a lane line center point;
fitting the lane line through the center points.
8. A lane line drawing apparatus, characterized by comprising:
an acquisition module for acquiring lane line data, the lane line data including lane line images, GPS data, inertial measurement data, and odometry data;
an extraction module for detecting the lane line images based on deep learning to extract lane lines, and computing the three-dimensional scatter points of the lane lines in the camera coordinate system;
a computing module for computing the relative pose between the lane line images according to the inertial measurement data and the odometry data;
a stitching module for stitching the three-dimensional scatter points into lane lines according to the relative pose between the lane line images, and associating the stitched lane lines with the corresponding positions in the GPS data;
a fitting module for clustering the stitched lane lines and fitting the clustered lane lines.
9. A device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the lane line drawing method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the lane line drawing method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910718639.6A CN110426051B (en) | 2019-08-05 | 2019-08-05 | Lane line drawing method and device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910718639.6A CN110426051B (en) | 2019-08-05 | 2019-08-05 | Lane line drawing method and device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110426051A true CN110426051A (en) | 2019-11-08 |
CN110426051B CN110426051B (en) | 2021-05-18 |
Family
ID=68412689
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910718639.6A Active CN110426051B (en) | 2019-08-05 | 2019-08-05 | Lane line drawing method and device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110426051B (en) |
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006153565A (en) * | 2004-11-26 | 2006-06-15 | Nissan Motor Co Ltd | In-vehicle navigation device and host-vehicle position correction method |
CN104766058A (en) * | 2015-03-31 | 2015-07-08 | 百度在线网络技术(北京)有限公司 | Method and device for obtaining lane line |
CN104865578A (en) * | 2015-05-12 | 2015-08-26 | 上海交通大学 | Indoor parking lot high-precision map generation device and method |
US20190065866A1 (en) * | 2017-08-22 | 2019-02-28 | TuSimple | Deep module and fitting module system and method for motion-based lane detection with multiple sensors |
CN108256446A (en) * | 2017-12-29 | 2018-07-06 | 百度在线网络技术(北京)有限公司 | Method, apparatus and device for determining lane lines in a road |
CN108489482A (en) * | 2018-02-13 | 2018-09-04 | 视辰信息科技(上海)有限公司 | Implementation method and system of a visual-inertial odometer |
CN108470159A (en) * | 2018-03-09 | 2018-08-31 | 腾讯科技(深圳)有限公司 | Lane line data processing method, device, computer equipment and storage medium |
CN108960183A (en) * | 2018-07-19 | 2018-12-07 | 北京航空航天大学 | Curve target recognition system and method based on multi-sensor fusion |
CN109460739A (en) * | 2018-11-13 | 2019-03-12 | 广州小鹏汽车科技有限公司 | Lane line detection method and device |
CN109902637A (en) * | 2019-03-05 | 2019-06-18 | 长沙智能驾驶研究院有限公司 | Lane line detection method, device, computer equipment and storage medium |
Cited By (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11487288B2 (en) | 2017-03-23 | 2022-11-01 | Tesla, Inc. | Data synthesis for autonomous control systems |
US11403069B2 (en) | 2017-07-24 | 2022-08-02 | Tesla, Inc. | Accelerated mathematical engine |
US11681649B2 (en) | 2017-07-24 | 2023-06-20 | Tesla, Inc. | Computational array microprocessor system using non-consecutive data formatting |
US11893393B2 (en) | 2017-07-24 | 2024-02-06 | Tesla, Inc. | Computational array microprocessor system with hardware arbiter managing memory requests |
US11409692B2 (en) | 2017-07-24 | 2022-08-09 | Tesla, Inc. | Vector computational unit |
US11561791B2 (en) | 2018-02-01 | 2023-01-24 | Tesla, Inc. | Vector computational unit receiving data elements in parallel from a last row of a computational array |
US11797304B2 (en) | 2018-02-01 | 2023-10-24 | Tesla, Inc. | Instruction set architecture for a vector computational unit |
US11734562B2 (en) | 2018-06-20 | 2023-08-22 | Tesla, Inc. | Data pipeline and deep learning system for autonomous driving |
US11841434B2 (en) | 2018-07-20 | 2023-12-12 | Tesla, Inc. | Annotation cross-labeling for autonomous control systems |
US11636333B2 (en) | 2018-07-26 | 2023-04-25 | Tesla, Inc. | Optimizing neural network structures for embedded systems |
US11562231B2 (en) | 2018-09-03 | 2023-01-24 | Tesla, Inc. | Neural networks for embedded devices |
US11893774B2 (en) | 2018-10-11 | 2024-02-06 | Tesla, Inc. | Systems and methods for training machine models with augmented data |
US11665108B2 (en) | 2018-10-25 | 2023-05-30 | Tesla, Inc. | QoS manager for system on a chip communications |
US11816585B2 (en) | 2018-12-03 | 2023-11-14 | Tesla, Inc. | Machine learning models operating at different frequencies for autonomous vehicles |
US11537811B2 (en) | 2018-12-04 | 2022-12-27 | Tesla, Inc. | Enhanced object detection for autonomous vehicles based on field view |
US11908171B2 (en) | 2018-12-04 | 2024-02-20 | Tesla, Inc. | Enhanced object detection for autonomous vehicles based on field view |
US11610117B2 (en) | 2018-12-27 | 2023-03-21 | Tesla, Inc. | System and method for adapting a neural network model on a hardware platform |
US11748620B2 (en) | 2019-02-01 | 2023-09-05 | Tesla, Inc. | Generating ground truth for machine learning from time series elements |
US11567514B2 (en) | 2019-02-11 | 2023-01-31 | Tesla, Inc. | Autonomous and user controlled vehicle summon to a target |
US11790664B2 (en) | 2019-02-19 | 2023-10-17 | Tesla, Inc. | Estimating object properties using visual image data |
CN111209805B (en) * | 2019-12-24 | 2022-05-31 | 武汉中海庭数据技术有限公司 | Rapid fusion optimization method for multi-channel segment data of lane line crowdsourcing data |
CN111209805A (en) * | 2019-12-24 | 2020-05-29 | 武汉中海庭数据技术有限公司 | Rapid fusion optimization method for multi-channel segment data of lane line crowdsourcing data |
CN111199567B (en) * | 2020-01-06 | 2023-09-12 | 河北科技大学 | Lane line drawing method and device and terminal equipment |
CN111199567A (en) * | 2020-01-06 | 2020-05-26 | 河北科技大学 | Lane line drawing method and device and terminal equipment |
CN111127551A (en) * | 2020-03-26 | 2020-05-08 | 北京三快在线科技有限公司 | Target detection method and device |
CN112050821B (en) * | 2020-09-11 | 2021-08-20 | 湖北亿咖通科技有限公司 | Lane line polymerization method |
CN112050821A (en) * | 2020-09-11 | 2020-12-08 | 湖北亿咖通科技有限公司 | Lane line polymerization method |
CN114252082A (en) * | 2022-03-01 | 2022-03-29 | 苏州挚途科技有限公司 | Vehicle positioning method and device and electronic equipment |
CN116129389A (en) * | 2023-03-27 | 2023-05-16 | 浙江零跑科技股份有限公司 | Lane line acquisition method, computer equipment, readable storage medium and motor vehicle |
Also Published As
Publication number | Publication date |
---|---|
CN110426051B (en) | 2021-05-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110426051A (en) | Lane line drawing method, device and storage medium | |
JP7034530B2 (en) | Information processing methods, devices, and terminals | |
CN110673115B (en) | Combined calibration method, device, equipment and medium for radar and integrated navigation system | |
JP6595182B2 (en) | Systems and methods for mapping, locating, and attitude correction | |
EP3505869A1 (en) | Method, apparatus, and computer readable storage medium for updating electronic map | |
WO2020038285A1 (en) | Lane line positioning method and device, storage medium and electronic device | |
WO2018227980A1 (en) | Camera sensor based lane line map construction method and construction system | |
CN108694882A (en) | Method, apparatus and equipment for marking map | |
CN110930495A (en) | Multi-unmanned aerial vehicle cooperation-based ICP point cloud map fusion method, system, device and storage medium | |
CN106461402A (en) | Method and system for determining a position relative to a digital map | |
CN110609290A (en) | Laser radar matching positioning method and device | |
CN107688184A (en) | Positioning method and system |
CN102472612A (en) | Three-dimensional object recognizing device and three-dimensional object recognizing method | |
WO2021051344A1 (en) | Method and apparatus for determining lane lines in high-precision map | |
CN111915675B (en) | Particle drift-based particle filtering point cloud positioning method, device and system thereof | |
CN103644904A (en) | Visual navigation method based on SIFT (scale invariant feature transform) algorithm | |
CN111912416A (en) | Method, device and equipment for positioning equipment | |
CN109903330A (en) | Method and apparatus for processing data |
KR102115004B1 (en) | Apparatus and method for generating three dimensional map using aerial images | |
CN116052155A (en) | Point cloud data processing method and system | |
CN113945937A (en) | Precision detection method, device and storage medium | |
CN111982133B (en) | Method and device for positioning vehicle based on high-precision map and electronic equipment | |
WO2022199195A1 (en) | Map updating method and system, vehicle-mounted terminal, server, and storage medium | |
CN114485698A (en) | Intersection guide line generating method and system | |
JP2022542082A (en) | Pose identification method, pose identification device, computer readable storage medium, computer equipment and computer program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |