CN109426800B - Lane line detection method and device

Info

Publication number: CN109426800B
Authority: CN (China)
Prior art keywords: lane line, data, detection result
Legal status: Active (granted)
Application number: CN201810688772.7A
Other languages: Chinese (zh)
Other versions: CN109426800A
Inventors: 刘思远, 王明东, 侯晓迪
Assignee: Tusimple Inc
Priority claimed from US15/683,463 (US10373003B2) and US15/683,494 (US10482769B2)
Application filed by Tusimple Inc
Publication of application CN109426800A; application granted and published as CN109426800B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588: Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Abstract

The invention discloses a lane line detection method and device, aiming to solve the problem of inaccurate positioning in prior-art lane line detection schemes. The method comprises the following steps: a lane line detection apparatus acquires current perception data of the driving environment of a vehicle, the current perception data comprising current frame image data and current positioning data; acquires lane line template data, the lane line template data being the lane line detection result data obtained by the previous lane line detection process; extracts current lane line image data from the perception data; and determines current lane line detection result data from the lane line image data and the lane line template data, the current lane line detection result data including data expressing the relative positional relationship between the vehicle and the lane line.

Description

Lane line detection method and device
Technical Field
The invention relates to the field of computer vision, in particular to a lane line detection method and a lane line detection device.
Background
Currently, one of the main research goals for Advanced Driver Assistance Systems (ADAS) is to improve the safety of the vehicle and of driving and to reduce road accidents. Intelligent vehicles and unmanned vehicles are expected to address road safety, traffic congestion, and passenger comfort. Among the research tasks for intelligent or unmanned vehicles, lane line detection is a complex and challenging one. As a main component of the road, the lane line provides a reference for the unmanned vehicle and guides safe driving. Lane line detection covers locating the road, determining the relative positional relationship between the vehicle and the road, and determining the vehicle's direction of travel.
In current technical solutions, lane line detection is usually implemented from images acquired by a camera together with a positioning signal provided by a GPS device. However, the lane line position information and the relative position information between the lane line and the vehicle determined in this way are of low accuracy and cannot satisfy the driving requirements of an autonomous vehicle. That is, existing lane line detection schemes suffer from low positioning accuracy.
Disclosure of Invention
In view of this, embodiments of the present invention provide a lane line detection method and apparatus, so as to solve the problem of inaccurate positioning in the existing lane line detection technology.
In one aspect, an embodiment of the present application provides a lane line detection method, including:
a lane line detection apparatus acquires current perception data of the driving environment of a vehicle, the current perception data comprising current frame image data and current positioning data;
acquiring lane line template data, the lane line template data being the lane line detection result data obtained by the previous lane line detection process;
extracting current lane line image data according to the perception data;
determining current lane line detection result data according to the lane line image data and the lane line template data, the current lane line detection result data including data expressing the relative positional relationship between the vehicle and the lane line.
In another aspect, an embodiment of the present application provides a lane line detection apparatus, comprising:
an acquisition unit, configured to acquire current perception data of the driving environment of a vehicle, the current perception data comprising current frame image data and positioning data, and to acquire lane line template data, the lane line template data being the lane line detection result data obtained by the previous lane line detection process;
an extraction unit, configured to extract current lane line image data according to the perception data;
a determining unit, configured to determine current lane line detection result data according to the lane line image data and the lane line template data, the current lane line detection result data including data expressing the relative positional relationship between the vehicle and the lane line.
In another aspect, an embodiment of the present application provides a lane line detection apparatus, including a processor and at least one memory, where the at least one memory stores at least one machine executable instruction, and the processor executes the at least one machine executable instruction to perform:
acquiring current perception data of the driving environment of a vehicle, the current perception data comprising current frame image data and current positioning data;
acquiring lane line template data, the lane line template data being the lane line detection result data obtained by the previous lane line detection process;
extracting current lane line image data according to the perception data;
determining current lane line detection result data according to the lane line image data and the lane line template data, the current lane line detection result data including data expressing the relative positional relationship between the vehicle and the lane line.
According to the technical solution provided by the embodiments of the present invention, the lane line detection apparatus acquires current perception data of the driving environment of the vehicle, extracts lane line image data from the current perception data, obtains the lane line detection result data produced by the previous lane line detection process (i.e., the lane line template data), and determines the current lane line detection result data from the current lane line image data and the previous detection result data. Because the previous lane line detection result data contains relatively accurate lane line positioning information, it provides positioning reference information for the current detection process. Compared with the prior art, in which lane line detection relies only on the currently acquired perception data, this allows more accurate lane line detection and more accurate positioning information, thereby solving the problem of inaccurate positioning in prior-art lane line detection schemes.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention.
Fig. 1 is a processing flow chart of a lane line detection method according to an embodiment of the present disclosure;
FIG. 2a is an example of lane line image data;
fig. 2b is another processing flow chart of the lane line detection method according to the embodiment of the present disclosure;
FIG. 3a is a flowchart of the process of step 104 in FIG. 1 or FIG. 2b;
fig. 3b is another processing flow chart of the lane line detection method according to the embodiment of the present disclosure;
fig. 4 is another processing flow chart of the lane line detection method according to the embodiment of the present application;
FIG. 5 is a flowchart of the process of step 105 in FIG. 4;
FIG. 6a is a flowchart of the process of step 106 in FIG. 4;
FIG. 6b is another flowchart of the process of step 106 in FIG. 4;
FIG. 7 is an example image;
FIG. 8 is a diagram illustrating an example of the expanded lane line of step 1061 in FIG. 6a;
FIG. 9 is a schematic view of the expanded lane line of FIG. 8 after adjustment;
fig. 10 is a block diagram of a lane line detection apparatus according to an embodiment of the present application;
fig. 11 is another structural block diagram of a lane line detection apparatus according to an embodiment of the present application;
fig. 12 is another block diagram of the lane line detection apparatus according to the embodiment of the present application;
fig. 13 is another structural block diagram of the lane line detection apparatus according to the embodiment of the present application.
Detailed Description
In order to enable those skilled in the art to better understand the technical solution of the present invention, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments that a person skilled in the art can derive from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
To address the problem of inaccurate positioning in prior-art lane line detection schemes, the embodiments of the present application provide a lane line detection method and apparatus.
In the lane line detection scheme provided by the embodiments of the present application, the lane line detection apparatus acquires current perception data of the driving environment of the vehicle, extracts lane line image data from the current perception data, obtains the lane line detection result data produced by the previous lane line detection process (i.e., the lane line template data), and determines the current lane line detection result data from the current lane line image data and the previous detection result data. Because the previous lane line detection result data contains relatively accurate lane line positioning information, it provides positioning reference information for the current detection process. Compared with the prior art, in which lane line detection relies only on the currently acquired perception data, this allows more accurate lane line detection and more accurate positioning information, thereby solving the problem of inaccurate positioning in prior-art lane line detection schemes.
The foregoing is the core idea of the present invention, and in order to make the technical solutions in the embodiments of the present invention better understood and make the above objects, features and advantages of the embodiments of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention are further described in detail with reference to the accompanying drawings.
Fig. 1 shows a processing flow of a lane line detection method provided in an embodiment of the present application, where the method includes the following processing procedures:
step 101, a lane line detection apparatus acquires current perception data of the driving environment of a vehicle; the current perception data comprises current frame image data and current positioning data;
step 102, acquiring lane line template data; the lane line template data is the lane line detection result data obtained by the previous lane line detection process;
step 103, extracting current lane line image data according to the perception data;
step 104, determining current lane line detection result data according to the lane line image data and the lane line template data; the current lane line detection result data includes data expressing the relative positional relationship between the vehicle and the lane line.
Step 102 and step 103 may be executed in either order.
The above-described implementation is described in detail below.
In the above step 101, the current perception data of the driving environment of the vehicle may be acquired by perception devices mounted on the vehicle. For example, at least one vehicle-mounted camera acquires at least one frame of current frame image data, and a positioning device acquires the current positioning data, wherein the positioning device includes a Global Positioning System (GPS) receiver and/or an Inertial Measurement Unit (IMU). The perception data may further include map data of the current driving environment and laser radar (LIDAR) data. The map data may be real map data acquired in advance, or map data provided by a Simultaneous Localization and Mapping (SLAM) unit of the vehicle.
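For illustration only, such a bundle of perception data might be organized as in the following minimal sketch; all field names and types are assumptions of this example, not part of the described method.

```python
from dataclasses import dataclass
from typing import List, Optional
import numpy as np

@dataclass
class PerceptionData:
    """Current perception data for one detection cycle (hypothetical layout)."""
    frame_images: List[np.ndarray]                 # current frame image data from one or more cameras
    positioning: np.ndarray                        # current positioning data, e.g. GPS (lat, lon, alt)
    imu_measurement: Optional[np.ndarray] = None   # optional IMU reading
    lidar_points: Optional[np.ndarray] = None      # optional LIDAR point cloud (N x 3)
    map_data: Optional[object] = None              # optional prior map or SLAM-provided map
```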
In step 102, the lane line template data is the lane line detection result data obtained by the previous lane line detection process, and includes lane line position information and data on the relative positional relationship between the vehicle and the lane line. The lane line template data (i.e., lane line detection result data) may be expressed as 3D spatial data from a top-view angle, for example in a coordinate system whose Y axis is the vehicle traveling direction and whose X axis is the direction perpendicular to it.
After the lane line template data is obtained in the last lane line detection process, the lane line template data may be stored in a storage device, which may be a local storage of the lane line detection apparatus, another storage in the vehicle, or a remote storage.
The lane line detection device may read the lane line template data from the storage device in the current lane line detection process, or may receive the lane line template data in a predetermined processing cycle.
In the embodiments of the present application, the lane line detection result data obtained by the previous detection process includes the position information of the lane line and the relative positional relationship between the vehicle and the lane line. Between two adjacent frames of image data, the position of the lane line changes little, and the lane line exhibits continuity and stability; likewise, the relative positional relationship between the vehicle and the lane line is relatively stable and changes continuously. The previous detection result can therefore serve as a reliable reference for the current detection.
In step 103, the extraction of the current lane line image data from the perception data may be implemented in various ways.
In a first mode, a semantic segmentation method is applied to the current frame image data acquired by each of the at least one camera: an algorithm or model obtained by pre-training classifies and labels the pixels of the current frame image data, and the current lane line image data is extracted from the labeled pixels.
The pre-trained algorithm or model may be obtained by iteratively training a neural network on ground truth data of the driving environment and image data captured by the camera.
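A minimal sketch of this first extraction mode follows. It assumes a pre-trained segmentation model exposing a predict method that returns a per-pixel class map; the model API and the lane class id are assumptions of this example.

```python
import numpy as np

LANE_CLASS_ID = 1  # assumed class id that the trained model assigns to lane-line pixels

def extract_lane_image(frame: np.ndarray, seg_model) -> np.ndarray:
    """Label every pixel of the current frame with the pre-trained model and
    keep only pixels classified as lane line (returns a binary H x W mask)."""
    class_map = seg_model.predict(frame)   # H x W array of per-pixel class ids (assumed API)
    return (class_map == LANE_CLASS_ID).astype(np.uint8)
```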
In a second mode, the current lane line image data can also be extracted by object recognition from the current frame image data and the current positioning data.
An example of one lane line image data is shown in fig. 2 a.
The present application lists only the above two methods of extracting lane line image data; the current lane line image data may also be obtained by other processing methods, which the present application does not strictly limit.
In some embodiments of the present application, since the result of the previous lane line detection process is not necessarily fully consistent with prior knowledge or common-sense rules of road structure, the lane line template data needs to be further adjusted before step 104 is performed, as shown in fig. 2b, which includes:
step 104S, adjusting the lane lines in the lane line template data according to the current perception data, prior knowledge and/or predetermined constraint conditions; wherein the prior knowledge or predetermined constraints comprise physical metric parameters or data expressions regarding the road structure.
For example, the prior knowledge or constraints may include: (1) the lane lines on a road are parallel to each other; (2) a curved lane line is an arc; (3) the length of a curved lane line is less than 300 meters; (4) the distance between adjacent lane lines is between 3 and 4 meters, for example about 3.75 meters; (5) the color of the lane line differs from the color of the rest of the road. The prior knowledge or constraint conditions may also include other contents or data according to the needs of a specific application scenario; the embodiments of the present application do not strictly limit them.
Through this adjustment, the lane line template data is made more consistent with the prior knowledge or conventional principles, providing more accurate positioning reference information for determining the current lane line detection result data.
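As one illustration of such an adjustment, the sketch below snaps implausible spacings between adjacent template lane lines to the typical value from prior (4). It is a simplified stand-in for the full adjustment, which would also enforce the parallelism and curvature priors.

```python
MIN_SPACING_M, MAX_SPACING_M, TYPICAL_SPACING_M = 3.0, 4.0, 3.75  # prior (4)

def adjust_template_spacing(lane_offsets_m):
    """Adjust lateral offsets (top-view X axis, metres, sorted left to right)
    of the template lane lines so adjacent spacings satisfy prior (4)."""
    adjusted = [lane_offsets_m[0]]
    for right in lane_offsets_m[1:]:
        spacing = right - adjusted[-1]
        if not MIN_SPACING_M <= spacing <= MAX_SPACING_M:
            spacing = TYPICAL_SPACING_M   # snap implausible spacing to the typical value
        adjusted.append(adjusted[-1] + spacing)
    return adjusted
```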
The processing in step 104 may specifically map the lane line template data into the lane line image data and obtain the current lane line detection result data from the mapping result; this may be implemented in various ways.
For example, the lane line template data and the lane line image data are first brought into a common coordinate system by coordinate conversion; the converted template data is then projected into the converted image data, and the current lane line detection result data is obtained by fitting the projection result with a predetermined formula or algorithm. The processing of step 104 may also be implemented in other ways.
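For example, if the template is kept in top-view road-plane coordinates, the coordinate conversion and projection could look like the following sketch, assuming a road-plane-to-image homography obtained from camera calibration:

```python
import numpy as np

def project_template_to_image(template_pts: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Project top-view template lane-line points (N x 2, metres on the road
    plane) into pixel coordinates using a 3 x 3 homography H (assumed known
    from calibration)."""
    pts_h = np.hstack([template_pts, np.ones((len(template_pts), 1))])  # to homogeneous coords
    img_h = pts_h @ H.T
    return img_h[:, :2] / img_h[:, 2:3]   # dehomogenize to (u, v) pixels
```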
In addition to the foregoing implementation, an embodiment of the present application provides an implementation based on machine learning, as shown in fig. 3a, specifically including:
step 1041, inputting the lane line image data and the lane line template data into a predetermined loss function, which outputs a cost value; the loss function expresses the positional relationship between the lane lines in the lane line template data and those in the lane line image data, and the cost value is the distance between the lane line in the template data and the lane line in the image data;
step 1042, if the difference between two successive cost values is greater than a predetermined threshold, iteratively modifying the position of the lane line in the lane line template data; if the difference between two successive cost values is less than or equal to the predetermined threshold, ending the iteration and obtaining the current lane line detection result data.
The operation of iteratively modifying the position of the lane line in the lane line template data can be realized by a gradient descent algorithm.
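A minimal sketch of steps 1041-1042 follows. It assumes the simplest possible setting, in which each lane line is reduced to sampled lateral positions and the loss is the mean squared lateral distance, so the gradient has a closed form; the real loss function and parameterization may differ.

```python
import numpy as np

def cost(template_x: np.ndarray, observed_x: np.ndarray) -> float:
    """Loss function: mean squared lateral distance between the template lane
    line and the lane line extracted from the image (the cost value)."""
    return float(np.mean((template_x - observed_x) ** 2))

def fit_template(template_x, observed_x, lr=0.1, threshold=1e-4, max_iter=200):
    """Iteratively move the template lane line toward the observed one by
    gradient descent; stop when the change between two successive cost
    values is at or below the threshold (steps 1041-1042)."""
    template_x = np.asarray(template_x, dtype=float).copy()
    observed_x = np.asarray(observed_x, dtype=float)
    prev = cost(template_x, observed_x)
    for _ in range(max_iter):
        grad = 2.0 * (template_x - observed_x) / template_x.size  # analytic gradient of the loss
        template_x -= lr * grad                                   # gradient descent step
        cur = cost(template_x, observed_x)
        if abs(prev - cur) <= threshold:
            break                                                 # iteration converged
        prev = cur
    return template_x   # fitted line: basis of the current detection result
```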
Further, in some embodiments of the present application, the loss function may be optimized according to the continuously accumulated lane line detection result data, so as to enhance the accuracy, stability and robustness of the loss function.
Through the processing shown in fig. 3a, the loss function continuously measures the distance between the lane line in the template data and the lane line in the image data, and the gradient descent algorithm continuously fits the template lane line to the image lane line, so that a more accurate current lane line detection result is obtained. This result may be 3D spatial data from a top-view angle; it includes data expressing the relative positional relationship between the vehicle and the lane line and may also include data expressing the position of the lane line.
Through the lane line detection processing shown in fig. 1, relatively accurate positioning reference information for the lane line and the vehicle is obtained from the previous detection result data; projecting the previous result data into the current lane line image data and fitting yields the current lane line detection result data, and thus relatively accurate positioning information for the current lane line and the vehicle. This solves the problem that the prior art cannot perform sufficiently accurate lane line detection.
In addition, in the embodiment of the application, the sensing data further includes various data, such as map data, which can further provide more accurate positioning information for the lane line detection processing, so as to obtain lane line detection result data with higher accuracy.
Further, in some embodiments, as shown in step 104t of fig. 3b, the current lane line detection result data obtained through the above-described processing is determined as lane line template data of the next lane line detection processing.
Alternatively, in other embodiments, the lane line detection result data obtained in step 104 may be further verified and optimized, to ensure that lane line template data with more accurate positioning information is provided for the next lane line detection process.
Fig. 4 illustrates a lane line verification and optimization process following the method shown in fig. 1, including:
step 105, verifying the current lane line detection result data;
step 106, if the verification passes, optimizing and adjusting the current lane line detection result data to obtain lane line template data for the next lane line detection process; if the verification fails, discarding the current lane line detection result data.
As shown in fig. 5, step 105 includes the following processing steps:
step 1051, determining the confidence of the current lane line detection result data according to a confidence model obtained by pre-training;
specifically, the current lane line detection result data may be provided as input to the confidence model, which outputs the corresponding confidence.
The confidence model is obtained by training a deep neural network in advance on historical lane line detection result data and lane line ground truth data. The confidence model represents the correspondence between lane line detection result data and confidence.
To train the deep neural network on historical lane line detection result data and ground truth data, the historical detection results are first compared with the ground truth; the historical detection results are then classified or labeled according to the comparison, for example labeling detection result data a, c and d as successful detections and detection result data b and e as failed detections. A neural network is then trained on the labeled historical detection result data and the ground truth data to obtain the confidence model. The trained confidence model reflects the success probability or failure probability (i.e., the confidence) of lane line detection result data.
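The labeling step of this training procedure might look like the sketch below, where a tolerance threshold (an assumption of this example) decides whether a historical result counts as a successful detection:

```python
import numpy as np

def label_for_confidence_training(historical_results, ground_truths, tol_m=0.5):
    """Compare each historical detection result with the ground truth and
    label it 1 (successful detection) or 0 (failed detection); the labeled
    pairs then train the confidence network. tol_m is an assumed tolerance
    in metres."""
    labels = []
    for result, truth in zip(historical_results, ground_truths):
        mean_err = float(np.mean(np.abs(np.asarray(result) - np.asarray(truth))))
        labels.append(1 if mean_err <= tol_m else 0)
    return labels
```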
step 1052, if the obtained confidence meets a predetermined verification condition, the verification succeeds; if the obtained confidence does not meet the predetermined verification condition, the verification fails.
For example, the verification condition may be: if the confidence indicates a success probability greater than or equal to X%, the verification is determined to be successful; otherwise, the verification fails.
Further, in some embodiments of the present application, the confidence model may also be further trained on continuously accumulated lane line detection result data and lane line ground truth data, to enhance its accuracy, stability and robustness; this optimization proceeds in the same way as the initial training and is not repeated here.
As shown in fig. 6a, step 106 comprises the following process:
step 1061, if the verification passes, expanding the lane lines in the current lane line detection result data;
specifically, the process of expanding the lane lines may include the following steps (a code sketch follows the list):
step S1, copying and translating the edge lane lines according to the lane line structure in the lane line detection result data;
step S2, if the copied and translated lane line can be included in the lane line detection result data, retaining it and storing the new lane line detection result data;
step S3, if the copied and translated lane line cannot be included in the lane line detection result data, discarding it.
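The sketch below illustrates steps S1-S3 in a simplified one-dimensional form: lane lines are represented by their lateral offsets, the edge lines are copied outward by one lane spacing, and a road-width bound (an assumption of this example) stands in for the test of whether the result data can include the new line.

```python
def expand_lanes(lane_offsets_m, spacing_m=3.75, road_half_width_m=11.0):
    """Copy and translate the edge lane lines (S1); keep a copy only if it
    still fits in the road region (S2), otherwise discard it (S3)."""
    lanes = sorted(lane_offsets_m)
    for candidate in (lanes[0] - spacing_m, lanes[-1] + spacing_m):
        if abs(candidate) <= road_half_width_m:   # S2: result data can include it
            lanes.append(candidate)               # keep the copied, translated line
        # S3: otherwise the candidate is simply dropped
    return sorted(lanes)
```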
For example, as shown in fig. 7, the current lane line detection result data includes two lane lines, CL1 and CL2, and new lane lines EL1 and EL2 can be obtained by expanding the lane lines. Fig. 8 is an expanded lane line displayed on the lane line image data.
step 1062, adjusting the lane lines in the lane line template data according to the current perception data, prior knowledge and/or predetermined constraint conditions, to obtain the lane line template data for the next lane line detection process.
The adjustment process may refer to step 104S described above.
Fig. 9 shows an example in which the expanded lane line EL2 of fig. 8 is adjusted to obtain an adjusted lane line EL2'; the adjusted lane line EL2' is closer to a straight line than the lane line EL2 before adjustment.
In some embodiments of the present application, step 104S and step 1062 may be provided simultaneously. In other embodiments of the present application, one of step 104S and step 1062 may be provided.
step 1063, if the verification fails, discarding the current lane line detection result data.
Further, as shown in step 1064 of fig. 6b, after the current lane line detection result data is discarded, preset lane line template data is determined as the lane line template data for the next lane line detection process. The preset data may be general-purpose lane line template data, lane line template data corresponding to a type of driving environment, or lane line template data for a specific driving environment. For example, it may be template data applicable to all environments, template data for a highway environment, template data for urban roads, or template data for the specific road on which the vehicle is located. The preset lane line template data can be set according to the needs of the specific application scenario.
The preset lane line template data may be pre-stored locally in the lane line detection apparatus, pre-stored in the automatic driving processing device of the vehicle, or stored on a remote server. When the lane line detection apparatus needs the preset lane line template data, it can obtain it by reading it locally or by sending a remote request and receiving the response.
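Selecting the fallback template could be as simple as the following sketch; the environment keys and the storage layout are assumptions of this example:

```python
PRESET_TEMPLATES = {
    "generic": "lane_template_generic",   # applicable to all environments
    "highway": "lane_template_highway",
    "urban":   "lane_template_urban",
}

def fallback_template(environment: str):
    """After a failed verification the current result is discarded; return a
    preset template matching the driving environment, defaulting to the
    general-purpose template."""
    return PRESET_TEMPLATES.get(environment, PRESET_TEMPLATES["generic"])
```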
Through the optimization and adjustment processing shown in fig. 4, the embodiments of the present application obtain lane line template data containing more accurate positioning information; compared with the template data obtained by the method of fig. 1 alone, the template data obtained by the method of fig. 4 gives the lane line detection method provided by the embodiments of the present application higher stability and robustness.
Based on the same inventive concept, the embodiment of the application also provides a lane line detection device.
Fig. 10 is a block diagram illustrating a structure of a lane line detection apparatus according to an embodiment of the present application, where the lane line detection apparatus includes:
the system comprises an acquisition unit 11, a processing unit and a display unit, wherein the acquisition unit is used for acquiring current perception data of the driving environment of a vehicle, and the current perception data comprises current frame image data and positioning data; acquiring lane line template data, wherein the lane line template data is lane line detection result data obtained by last lane line detection processing;
the perception data further comprises at least one of the following: map data of the current driving environment, and laser radar (LIDAR) data; the positioning data comprises GPS positioning data and/or inertial navigation positioning data;
an extraction unit 12, configured to extract current lane line image data according to the perception data;
a determining unit 13, configured to determine current lane line detection result data according to the lane line image data and the lane line template data; the current lane line detection result data includes data expressing the relative positional relationship between the vehicle and the lane line.
The lane line template data and the lane line detection result data are 3D spatial data from a top-view angle.
In some embodiments, the extraction unit 12 extracts the lane line image data from the current frame image data according to an object recognition method or a semantic segmentation method.
The determining unit 13 determines the current lane line detection result data from the lane line image data and the lane line template data by mapping the lane line template data into the lane line image data and fitting according to the mapping result. Further, the determining unit 13 inputs the lane line image data and the lane line template data into a predetermined loss function, which outputs a cost value; the loss function expresses the positional relationship between the lane lines in the template data and those in the image data, and the cost value is the distance between the lane line in the template data and the lane line in the image data. If the difference between two successive cost values is greater than a predetermined threshold, the position of the lane line in the template data is iteratively modified; if the difference is less than or equal to the threshold, the iteration ends and the current lane line detection result data is obtained. In some application scenarios, the determining unit 13 iteratively modifies the position of the lane line in the lane line template data using a gradient descent algorithm.
Before determining the current lane line detection result data from the lane line image data and the lane line template data, the determining unit 13 further adjusts the lane line template data: the lane lines in the template data are adjusted according to the current perception data, prior knowledge and/or predetermined constraint conditions, wherein the prior knowledge or predetermined constraints comprise physical metric parameters or data expressions regarding the road structure.
Further, the determining unit 13 is also configured to determine the current lane line detection result data as the lane line template data for the next lane line detection process.
In other embodiments, the lane line detection apparatus may further include, as shown in fig. 11:
a checking unit 14, configured to verify the current lane line detection result data;
an optimization unit 15, configured to, if the verification by the checking unit 14 passes, optimize and adjust the current lane line detection result data to obtain lane line template data for the next lane line detection process, and, if the verification fails, discard the current lane line detection result data.
The checking unit 14 verifies the current lane line detection result data by determining the confidence of the data according to a confidence model obtained by pre-training: if the obtained confidence meets a predetermined verification condition, the verification succeeds; otherwise, the verification fails.
Further, as shown in fig. 12, the lane line detection apparatus provided by the embodiments of the present application may further include: a pre-training unit 16, configured to train a deep neural network in advance on historical lane line detection result data and lane line ground truth data to obtain the confidence model; the confidence model represents the correspondence between lane line detection result data and confidence.
The optimization unit 15 optimizes and adjusts the current lane line detection result data by expanding the lane lines in the current lane line detection result data, and adjusting the lane lines in the lane line template data according to the current perception data, prior knowledge and/or predetermined constraint conditions to obtain the lane line template data for the next lane line detection process; the prior knowledge or predetermined constraints comprise physical metric parameters or data expressions regarding the road structure.
The optimization unit 15 expands the lane lines in the current lane line detection result data by copying and translating the edge lane lines according to the lane line structure in the detection result data; if the copied and translated lane line can be included in the detection result data, it is retained and the new detection result data is stored; if it cannot be included, it is discarded.
Further, the optimization unit 15 is configured to determine preset lane line template data as the lane line template data for the next lane line detection process after the current lane line detection result data is discarded.
With the lane line detection apparatus provided by the embodiments of the present application, relatively accurate positioning reference information for the lane line and the vehicle is obtained from the previous lane line detection result data; projecting the previous result data into the current lane line image data and fitting yields the current lane line detection result data, and thus relatively accurate positioning information for the current lane line and the vehicle. This solves the problem that the prior art cannot perform sufficiently accurate lane line detection.
Based on the same inventive concept, the embodiment of the application also provides a lane line detection device.
As shown in fig. 13, the lane line detection apparatus provided in the embodiment of the present application includes a processor 131 and at least one memory 132, where the at least one memory stores at least one machine executable instruction, and the processor executes the at least one machine executable instruction to perform:
acquiring current perception data of the driving environment of a vehicle, the current perception data comprising current frame image data and current positioning data;
acquiring lane line template data, the lane line template data being the lane line detection result data obtained by the previous lane line detection process;
extracting current lane line image data according to the perception data;
determining current lane line detection result data according to the lane line image data and the lane line template data, the current lane line detection result data including data expressing the relative positional relationship between the vehicle and the lane line.
The lane line template data and the lane line detection result data are 3D spatial data from a top-view angle. The perception data further comprises at least one of the following: map data of the current driving environment, and laser radar (LIDAR) data; the positioning data includes GPS positioning data and/or inertial navigation positioning data.
In some embodiments, the processor 131 executes the at least one machine executable instruction to extract the lane line image data from the current frame image data according to an object recognition method or a semantic segmentation method.
The processor 131 executes the at least one machine executable instruction to determine the current lane line detection result data from the lane line image data and the lane line template data by mapping the lane line template data into the lane line image data and fitting according to the mapping result. This processing may specifically include: inputting the lane line image data and the lane line template data into a predetermined loss function, which outputs a cost value, the loss function expressing the positional relationship between the lane lines in the template data and those in the image data, and the cost value being the distance between the lane line in the template data and the lane line in the image data; if the difference between two successive cost values is greater than a predetermined threshold, iteratively modifying the position of the lane line in the template data; if the difference is less than or equal to the threshold, ending the iteration and obtaining the current lane line detection result data. In some application scenarios, the processor 131 may execute the at least one machine executable instruction to iteratively modify the position of the lane line in the lane line template data using a gradient descent algorithm.
Before determining the current lane line detection result data from the lane line image data and the lane line template data, the processor 131 executes the at least one machine executable instruction to adjust the lane lines in the lane line template data according to the current perception data, prior knowledge and/or predetermined constraint conditions, wherein the prior knowledge or predetermined constraints comprise physical metric parameters or data expressions regarding the road structure.
The processor executes the at least one machine executable instruction to further perform: determining the current lane line detection result data as the lane line template data for the next lane line detection process.
In other embodiments, the processor 131 executes the at least one machine executable instruction to further perform: verifying the current lane line detection result data; if the verification passes, optimizing and adjusting the current lane line detection result data to obtain lane line template data for the next lane line detection process; if the verification fails, discarding the current lane line detection result data.
The processor 131 executes the at least one machine executable instruction to verify the current lane line detection result data by determining the confidence of the data according to a confidence model obtained by pre-training: if the obtained confidence meets a predetermined verification condition, the verification succeeds; otherwise, the verification fails.
The processor 131 executes the at least one machine executable instruction to perform the pre-training that obtains the confidence model: a deep neural network is trained in advance on historical lane line detection result data and lane line ground truth data; the confidence model represents the correspondence between lane line detection result data and confidence.
The processor 131 executes the at least one machine executable instruction to optimize and adjust the current lane line detection result data by expanding the lane lines in the current lane line detection result data, and adjusting the lane lines in the lane line template data according to the current perception data, prior knowledge and/or predetermined constraint conditions to obtain the lane line template data for the next lane line detection process; the prior knowledge or predetermined constraints comprise physical metric parameters or data expressions regarding the road structure.
The processor 131 executes the at least one machine executable instruction to expand the lane lines in the current lane line detection result data by copying and translating the edge lane lines according to the lane line structure in the detection result data; if the copied and translated lane line can be included in the detection result data, it is retained and the new detection result data is stored; if it cannot be included, it is discarded.
After discarding the current lane line detection result data, the processor 131 executes the at least one machine executable instruction to determine preset lane line template data as the lane line template data for the next lane line detection process.
With the lane line detection apparatus provided by the embodiments of the present application, relatively accurate positioning reference information for the lane line and the vehicle is obtained from the previous lane line detection result data; projecting the previous result data into the current lane line image data and fitting yields the current lane line detection result data, and thus relatively accurate positioning information for the current lane line and the vehicle. This solves the problem that the prior art cannot perform sufficiently accurate lane line detection.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (39)

1. A lane line detection method is characterized by comprising the following steps:
the method comprises the steps that a lane line detection device obtains current perception data of a driving environment of a vehicle; the current perception data comprises current frame image data and current positioning data;
acquiring lane line template data; the lane line template data is lane line detection result data obtained by the previous frame of lane line detection processing;
extracting current lane line image data according to the perception data;
determining to obtain current lane line detection result data according to the lane line image data and the lane line template data; the current lane line detection result data comprises data expressing the relative position relationship between the vehicle and the lane line;
determining to obtain current lane line detection result data according to the lane line image data and the lane line template data, wherein the method comprises the following steps:
mapping the lane line template data to lane line image data, and fitting according to a mapping result to obtain current lane line detection result data;
the method for mapping the lane line template data to the lane line image data and obtaining the current lane line detection result data according to the mapping result comprises the following steps:
inputting the lane line image data and the lane line template data into a predetermined loss function, the loss function outputting a cost value; wherein the loss function is a function expressing a positional relationship of a lane line between the lane line template data and the lane line image data, and the cost value is a distance between the lane line in the lane line template data and the lane line in the lane line image data;
iteratively modifying the position of the lane line in the lane line template data under the condition that the difference value of the two adjacent cost values is greater than a preset threshold value; and under the condition that the difference value of the two adjacent cost values is less than or equal to the preset threshold value, finishing the iterative processing and obtaining the current lane line detection result data.
2. The method of claim 1, wherein iteratively modifying the position of the lane line in the lane line template data comprises:
and iteratively modifying the position of the lane line in the lane line template data by adopting a gradient descent algorithm.
3. The method of claim 1, wherein before determining that current lane line detection result data is obtained based on the lane line image data and the lane line template data, the method further comprises:
adjusting the lane lines in the lane line template data according to the current perception data, the prior knowledge and/or the preset constraint conditions;
wherein the a priori knowledge or predetermined constraints comprise physical metric parameters or data expressions regarding the road structure.
4. The method of claim 1, further comprising:
checking the current lane line detection result data;
under the condition of passing the inspection, optimizing and adjusting the current lane line detection result data to obtain lane line template data for next lane line detection processing; in the case of a failed verification, the current lane line detection result data is discarded.
5. The method of claim 4, wherein the checking the current lane line detection result data comprises:
determining the confidence coefficient of the current lane line detection result data according to a confidence coefficient model obtained by pre-training;
under the condition that the obtained confidence coefficient is determined to accord with the preset detection condition, the detection is successful; in the case where it is determined that the obtained confidence does not meet the predetermined test condition, the test fails.
6. The method of claim 5, further comprising pre-training a confidence model comprising:
training a deep neural network to obtain a confidence model according to historical lane line detection result data and lane line real data in advance; the confidence coefficient model is used for representing the corresponding relation between the lane line detection result data and the confidence coefficient.
7. The method of claim 4, wherein the optimally adjusting the current lane line detection result data comprises:
expanding the lane lines in the current lane line detection result data;
adjusting the lane lines in the lane line template data according to the current perception data, the prior knowledge and/or the preset constraint conditions to obtain lane line template data for the next lane line detection processing; wherein the a priori knowledge or predetermined constraints comprise physical metric parameters or data expressions regarding the road structure.
8. The method of claim 7, wherein expanding the lane lines in the current lane line detection result data comprises:
copying and translating the edge lane lines in the lane line detection result data according to the lane line structure in the lane line detection result data;
under the condition that the lane line detection result data can include the copied and translated lane line, keeping the copied and translated lane line, and storing new lane line detection result data;
and when the copied and translated lane line cannot be included in the lane line detection result data, abandoning the copied and translated lane line.
9. The method according to claim 4, wherein after discarding the current lane line detection result data, a preset lane line template data is determined as the lane line template data for the next lane line detection process.
10. The method of claim 1, further comprising:
and determining the current lane line detection result data as lane line template data for next lane line detection processing.
11. The method according to claim 1, wherein the lane line template data and the lane line detection result data are 3D spatial data of an overhead angle.
12. The method of claim 1, wherein extracting lane line image data from the current frame image data comprises:
and extracting the lane line image data from the current frame image data according to an object recognition method or a semantic segmentation method.
13. The method of claim 1, wherein the perception data further comprises at least one of: map data of the current driving environment, and laser radar (LIDAR) data;
the positioning data includes GPS positioning data and/or inertial navigation positioning data.
14. A lane line detection apparatus, comprising:
an acquisition unit, configured to acquire current perception data of the driving environment of a vehicle, wherein the current perception data comprises current frame image data and positioning data, and to acquire lane line template data, wherein the lane line template data is lane line detection result data obtained by the previous frame of lane line detection processing;
the extraction unit is used for extracting current lane line image data according to the perception data;
the determining unit is used for determining and obtaining the current lane line detection result data according to the lane line image data and the lane line template data; the current lane line detection result data comprises data expressing the relative position relationship between the vehicle and the lane line;
the determining unit determines to obtain current lane line detection result data according to the lane line image data and the lane line template data, and the determining unit comprises the following steps:
mapping the lane line template data to lane line image data, and fitting according to a mapping result to obtain current lane line detection result data;
the determining unit maps the lane line template data to the lane line image data, and obtains current lane line detection result data according to the mapping result, including:
inputting the lane line image data and the lane line template data into a predetermined loss function, the loss function outputting a cost value; wherein the loss function is a function expressing a positional relationship of a lane line between the lane line template data and the lane line image data, and the cost value is a distance between the lane line in the lane line template data and the lane line in the lane line image data;
iteratively modifying the position of the lane line in the lane line template data under the condition that the difference value of the two adjacent cost values is greater than a preset threshold value; and under the condition that the difference value of the two adjacent cost values is less than or equal to the preset threshold value, finishing the iterative processing and obtaining the current lane line detection result data.
15. The apparatus of claim 14, wherein the determination unit iteratively modifies the position of the lane line in the lane line template data, comprising:
and iteratively modifying the position of the lane line in the lane line template data by adopting a gradient descent algorithm.
16. The apparatus of claim 14, wherein the determining unit, before determining to obtain the current lane line detection result data according to the lane line image data and the lane line template data, is further configured to:
adjusting the lane lines in the lane line template data according to the current perception data, the prior knowledge and/or the preset constraint conditions;
wherein the a priori knowledge or predetermined constraints comprise physical metric parameters or data expressions regarding the road structure.
17. The apparatus of claim 14, further comprising:
the inspection unit is used for inspecting the current lane line detection result data;
the optimization unit is used for optimizing and adjusting the current lane line detection result data under the condition that the inspection unit passes the inspection, so that lane line template data used for next lane line detection processing is obtained; in the case of a failed verification, the current lane line detection result data is discarded.
18. The apparatus of claim 17, wherein the checking unit checks the current lane line detection result data, including:
determining the confidence coefficient of the current lane line detection result data according to a confidence coefficient model obtained by pre-training;
under the condition that the obtained confidence coefficient is determined to accord with the preset detection condition, the detection is successful; in the case where it is determined that the obtained confidence does not meet the predetermined test condition, the test fails.
19. The apparatus of claim 18, further comprising:
the pre-training unit is used for training the deep neural network to obtain a confidence model in advance according to historical lane line detection result data and lane line real data; the confidence coefficient model is used for representing the corresponding relation between the lane line detection result data and the confidence coefficient.
20. The apparatus of claim 17, wherein the optimization unit optimizes and adjusts the current lane line detection result data by:
expanding the lane lines in the current lane line detection result data; and
adjusting the lane lines in the lane line template data according to the current perception data, prior knowledge and/or predetermined constraints to obtain the lane line template data for the next lane line detection processing; wherein the prior knowledge or predetermined constraints comprise physical metric parameters or data expressions regarding the road structure.
21. The apparatus of claim 20, wherein the optimization unit expands the lane lines in the current lane line detection result data by:
copying and translating the edge lane lines in the lane line detection result data according to the lane line structure in that data;
keeping a copied-and-translated lane line and storing the new lane line detection result data if the lane line detection result data can accommodate it, and discarding the copied-and-translated lane line if it cannot.
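A minimal sketch of this copy-and-translate expansion, assuming lane lines are (N, 2) point arrays ordered left to right with the lateral coordinate in column 0; the lane width and the lateral bound standing in for "can be included in the detection result" are assumptions.

```python
import numpy as np

def expand_lane_lines(lines, lane_width=3.75, lateral_bound=15.0):
    """Copy each outermost lane line and translate it outward by one
    lane width; keep the copy only if it stays within a lateral bound
    that approximates the road surface."""
    expanded = list(lines)
    for edge, direction in ((lines[0], -1.0), (lines[-1], +1.0)):
        candidate = np.asarray(edge, dtype=float).copy()
        candidate[:, 0] += direction * lane_width  # translate laterally
        if np.all(np.abs(candidate[:, 0]) <= lateral_bound):
            # Keep the copied-and-translated line in the new result.
            expanded.insert(0 if direction < 0 else len(expanded), candidate)
        # Otherwise the copied-and-translated line is discarded.
    return expanded
```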
22. The apparatus of claim 17, wherein the optimization unit is further configured to, after discarding the current lane line detection result data, determine preset lane line template data as the lane line template data for the next lane line detection processing.
23. The apparatus of claim 14, wherein the determining unit is further configured to determine the current lane line detection result data as the lane line template data for the next lane line detection processing.
24. The apparatus of claim 14, wherein the extracting unit extracts the lane line image data from the current frame image data by:
extracting the lane line image data from the current frame image data using an object recognition method or a semantic segmentation method.
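For instance, under the assumption that a semantic segmentation network yields a per-pixel lane probability map, the extraction step could be as simple as:

```python
import numpy as np

def extract_lane_pixels(prob_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Turn a per-pixel lane probability map of shape (H, W), from any
    semantic segmentation network, into (x, y) lane pixel coordinates,
    i.e. the 'lane line image data'."""
    ys, xs = np.nonzero(prob_map >= threshold)
    return np.stack([xs, ys], axis=1)
```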
25. The apparatus of claim 14, wherein the lane line template data and the lane line detection result data are 3D spatial data in an overhead (bird's-eye) view.
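The claims do not say how the overhead view is produced; one common approach is an inverse perspective mapping onto the road plane via a calibrated homography, sketched here with OpenCV. The four calibration point pairs are hypothetical, and the road plane is taken as height zero, which is where the 3D interpretation comes from.

```python
import cv2
import numpy as np

# Hypothetical calibration: four pixel locations on the road plane and
# their ground-plane coordinates in metres (x lateral, y forward).
image_pts = np.float32([[420, 560], [860, 560], [1180, 720], [100, 720]])
ground_pts = np.float32([[-1.875, 30.0], [1.875, 30.0], [1.875, 5.0], [-1.875, 5.0]])
H = cv2.getPerspectiveTransform(image_pts, ground_pts)

def to_overhead(pixel_pts: np.ndarray) -> np.ndarray:
    """Project lane pixels into the overhead ground plane; with road
    height taken as zero, the result is 3D spatial data (x, y, 0)."""
    pts = pixel_pts.reshape(-1, 1, 2).astype(np.float32)
    flat = cv2.perspectiveTransform(pts, H).reshape(-1, 2)
    return np.hstack([flat, np.zeros((len(flat), 1), dtype=np.float32)])
```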
26. The apparatus of claim 14, wherein the perception data further comprises at least one of: map data and LiDAR data of the current driving environment;
and the positioning data comprises GPS positioning data and/or inertial navigation positioning data.
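Purely as a sketch, the per-frame inputs enumerated in this claim could be grouped in a container such as the following; every field name here is a hypothetical choice, not the patent's.

```python
from dataclasses import dataclass
from typing import Optional, Tuple
import numpy as np

@dataclass
class PerceptionData:
    """Per-frame inputs named by this claim; only the image is required."""
    image: np.ndarray                          # current frame image data
    gps: Optional[Tuple[float, float]] = None  # GPS positioning data
    ins: Optional[np.ndarray] = None           # inertial navigation positioning data
    lidar: Optional[np.ndarray] = None         # LiDAR point cloud of the environment
    map_data: Optional[dict] = None            # map data
```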
27. A lane line detection apparatus, comprising a processor and at least one memory, the at least one memory having at least one machine executable instruction stored therein, the processor executing the at least one machine executable instruction to perform:
acquiring current perception data of the driving environment of a vehicle, the current perception data comprising current frame image data and current positioning data;
acquiring lane line template data, the lane line template data being the lane line detection result data obtained by the lane line detection processing of the previous frame;
extracting current lane line image data from the perception data;
determining the current lane line detection result data from the lane line image data and the lane line template data, the current lane line detection result data comprising data expressing the relative positional relationship between the vehicle and the lane lines;
wherein the processor executes the at least one machine executable instruction to determine the current lane line detection result data from the lane line image data and the lane line template data by:
mapping the lane line template data onto the lane line image data, and fitting the mapping result to obtain the current lane line detection result data;
wherein the processor executes the at least one machine executable instruction to map the lane line template data onto the lane line image data and fit the mapping result into the current lane line detection result data by:
inputting the lane line image data and the lane line template data into a predetermined loss function that outputs a cost value, wherein the loss function expresses the positional relationship between the lane lines in the lane line template data and those in the lane line image data, and the cost value is the distance between the two sets of lane lines;
iteratively modifying the positions of the lane lines in the lane line template data while the difference between two successive cost values is greater than a preset threshold, and ending the iteration and taking the result as the current lane line detection result data once that difference is less than or equal to the preset threshold.
28. The apparatus of claim 27, wherein the processor executes the at least one machine executable instruction to iteratively modify the positions of the lane lines in the lane line template data by:
applying a gradient descent algorithm to iteratively modify the positions of the lane lines in the lane line template data.
29. The apparatus of claim 27, wherein the processor executes the at least one machine executable instruction to further perform, before determining the current lane line detection result data from the lane line image data and the lane line template data:
adjusting the lane lines in the lane line template data according to the current perception data, prior knowledge and/or predetermined constraints;
wherein the prior knowledge or predetermined constraints comprise physical metric parameters or data expressions regarding the road structure.
30. The apparatus of claim 27, wherein the processor executes the at least one machine executable instruction to further perform:
checking the current lane line detection result data;
if the check passes, optimizing and adjusting the current lane line detection result data to obtain the lane line template data for the next lane line detection processing; if the check fails, discarding the current lane line detection result data.
31. The apparatus of claim 30, wherein the processor executes the at least one machine executable instruction to check the current lane line detection result data by:
determining the confidence of the current lane line detection result data according to a confidence model obtained by pre-training;
the check passes if the obtained confidence meets a predetermined check condition, and fails if it does not.
32. The apparatus of claim 31, wherein the processor executes the at least one machine executable instruction to further perform pre-training to obtain the confidence model, by:
training a deep neural network in advance on historical lane line detection result data and real lane line data to obtain the confidence model; wherein the confidence model represents the correspondence between lane line detection result data and confidence.
33. The apparatus of claim 30, wherein the processor executes the at least one machine executable instruction to optimize and adjust the current lane line detection result data by:
expanding the lane lines in the current lane line detection result data; and
adjusting the lane lines in the lane line template data according to the current perception data, prior knowledge and/or predetermined constraints to obtain the lane line template data for the next lane line detection processing; wherein the prior knowledge or predetermined constraints comprise physical metric parameters or data expressions regarding the road structure.
34. The apparatus of claim 33, wherein the processor executes the at least one machine executable instruction to expand the lane lines in the current lane line detection result data by:
copying and translating the edge lane lines in the lane line detection result data according to the lane line structure in that data;
keeping a copied-and-translated lane line and storing the new lane line detection result data if the lane line detection result data can accommodate it, and discarding the copied-and-translated lane line if it cannot.
35. The apparatus of claim 30, wherein the processor executes the at least one machine executable instruction to perform, after discarding the current lane line detection result data, determining preset lane line template data as the lane line template data for the next lane line detection processing.
36. The apparatus of claim 27, wherein the processor executes the at least one machine executable instruction to further perform:
determining the current lane line detection result data as the lane line template data for the next lane line detection processing.
37. The apparatus of claim 27, wherein the lane line template data and the lane line detection result data are 3D spatial data in an overhead (bird's-eye) view.
38. The apparatus of claim 27, wherein the processor executes the at least one machine executable instruction to extract the lane line image data from the current frame image data by:
extracting the lane line image data from the current frame image data using an object recognition method or a semantic segmentation method.
39. The apparatus of claim 27, wherein the perception data further comprises at least one of: map data and LiDAR data of the current driving environment;
and the positioning data comprises GPS positioning data and/or inertial navigation positioning data.
CN201810688772.7A 2017-08-22 2018-06-28 Lane line detection method and device Active CN109426800B (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US15/683,463 US10373003B2 (en) 2017-08-22 2017-08-22 Deep module and fitting module system and method for motion-based lane detection with multiple sensors
US15/683,494 US10482769B2 (en) 2017-08-22 2017-08-22 Post-processing module system and method for motioned-based lane detection with multiple sensors
US 15/683,494 2017-08-22
US 15/683,463 2017-08-22

Publications (2)

Publication Number Publication Date
CN109426800A CN109426800A (en) 2019-03-05
CN109426800B true CN109426800B (en) 2021-08-13

Family

ID=65514491

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810688772.7A Active CN109426800B (en) 2017-08-22 2018-06-28 Lane line detection method and device

Country Status (1)

Country Link
CN (1) CN109426800B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020220182A1 (en) * 2019-04-29 2020-11-05 深圳市大疆创新科技有限公司 Lane line detection method and apparatus, control device, and storage medium
CN110595490B (en) * 2019-09-24 2021-12-14 百度在线网络技术(北京)有限公司 Preprocessing method, device, equipment and medium for lane line perception data
CN111439259B (en) * 2020-03-23 2020-11-27 成都睿芯行科技有限公司 Agricultural garden scene lane deviation early warning control method and system based on end-to-end convolutional neural network
CN111898540A (en) * 2020-07-30 2020-11-06 平安科技(深圳)有限公司 Lane line detection method, lane line detection device, computer equipment and computer-readable storage medium
CN112180923A (en) * 2020-09-23 2021-01-05 深圳裹动智驾科技有限公司 Automatic driving method, intelligent control equipment and automatic driving vehicle
CN112699747A (en) * 2020-12-21 2021-04-23 北京百度网讯科技有限公司 Method and device for determining vehicle state, road side equipment and cloud control platform
CN113167885B (en) * 2021-03-03 2022-05-31 华为技术有限公司 Lane line detection method and lane line detection device
CN113175937B (en) * 2021-06-29 2021-09-28 天津天瞳威势电子科技有限公司 Method and device for evaluating lane line sensing result

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9429943B2 (en) * 2012-03-05 2016-08-30 Florida A&M University Artificial intelligence valet systems and methods
US20150112765A1 * 2013-10-22 2015-04-23 LinkedIn Corporation Systems and methods for determining recruiting intent
CN104700072B (en) * 2015-02-06 2018-01-19 中国科学院合肥物质科学研究院 Recognition methods based on lane line historical frames
CN105046235B (en) * 2015-08-03 2018-09-07 百度在线网络技术(北京)有限公司 The identification modeling method and device of lane line, recognition methods and device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016130719A2 (en) * 2015-02-10 2016-08-18 Amnon Shashua Sparse map for autonomous vehicle navigation
US9286524B1 (en) * 2015-04-15 2016-03-15 Toyota Motor Engineering & Manufacturing North America, Inc. Multi-task deep convolutional neural networks for efficient and robust traffic lane detection
US9443320B1 (en) * 2015-05-18 2016-09-13 Xerox Corporation Multi-object tracking with generic object proposals
CN106611147A (en) * 2015-10-15 2017-05-03 腾讯科技(深圳)有限公司 Vehicle tracking method and device
CN106845385A (en) * 2017-01-17 2017-06-13 腾讯科技(上海)有限公司 The method and apparatus of video frequency object tracking

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Deep Neural Network for Structural Prediction and Lane Detection in Traffic Scene; Jun Li et al.; IEEE Transactions on Neural Networks and Learning Systems; 2017-03-31; Vol. 28, No. 3; pp. 690-703 *
A Real-Time Lane Line Detection Algorithm Based on Inter-Frame Association; Li Chao et al.; Computer Science; 2017-02-28; Vol. 44, No. 2; pp. 318-321 *

Also Published As

Publication number Publication date
CN109426800A (en) 2019-03-05

Similar Documents

Publication Publication Date Title
CN109426800B (en) Lane line detection method and device
US11176701B2 (en) Position estimation system and position estimation method
EP3607272B1 (en) Automated image labeling for vehicle based on maps
EP4145393B1 (en) Vehicle localization
KR102483649B1 (en) Vehicle localization method and vehicle localization apparatus
CN109767637B (en) Method and device for identifying and processing countdown signal lamp
CN111830953B (en) Vehicle self-positioning method, device and system
US20180045516A1 (en) Information processing device and vehicle position detecting method
US10718628B2 (en) Host vehicle position estimation device
JP4973736B2 (en) Road marking recognition device, road marking recognition method, and road marking recognition program
US7894632B2 (en) Apparatus and method of estimating center line of intersection
US11467001B2 (en) Adjustment value calculation method
CN110869867B (en) Method, apparatus and storage medium for verifying digital map of vehicle
JP2020135874A (en) Local sensing-based autonomous navigation, associated system and method
CN114543819B (en) Vehicle positioning method, device, electronic equipment and storage medium
KR20180067199A (en) Apparatus and method for recognizing object
JP2018048949A (en) Object recognition device
CN110766761A (en) Method, device, equipment and storage medium for camera calibration
JP7461399B2 (en) Method and device for assisting the running operation of a motor vehicle, and motor vehicle
US20220355818A1 (en) Method for a scene interpretation of an environment of a vehicle
US11908206B2 (en) Compensation for vertical road curvature in road geometry estimation
US11377125B2 (en) Vehicle rideshare localization and passenger identification for autonomous vehicles
KR102316818B1 (en) Method and apparatus of updating road network
JP7229111B2 (en) MAP UPDATE DATA GENERATION DEVICE AND MAP UPDATE DATA GENERATION METHOD
JP2021018823A (en) Method and system for improving detection capability of driving support system based on machine learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant