CN117980949A - Method and system for lane tracking of autonomous vehicles


Info

Publication number
CN117980949A
Authority
CN
China
Prior art keywords
lane
clothoid
tracking system
points
measured
Prior art date
Legal status
Pending
Application number
CN202280064485.3A
Other languages
Chinese (zh)
Inventor
波图拉朱·查瓦利
斯里·哈里·布帕拉·哈里巴赫塔
穆拉里克里希纳·斯里达尔
Current Assignee
Continental Zhixing Germany Co ltd
Original Assignee
Continental Zhixing Germany Co ltd
Priority date
Filing date
Publication date
Application filed by Continental Zhixing Germany Co ltd
Priority claimed from PCT/EP2022/076759 (WO2023046975A1)
Publication of CN117980949A

Classifications

    • G06T7/70 Image analysis - Determining position or orientation of objects or cameras
    • G06V10/62 Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/82 Image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256 Lane; Road marking

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

The disclosed subject matter relates to the field of autonomous vehicles and discloses methods and systems for training a lane tracking system and for lane tracking of an autonomous vehicle. During the training phase, the lane tracking system receives ground truth values and measured values corresponding to lane boundary detection points and determines ground truth clothoid points and measured clothoid points. Thereafter, coefficient values of the clothoid parameters are determined for the measured clothoid points to model the lane boundaries, and Kalman filter parameters for the coefficient values are then determined using at least one neural network to track the lane boundaries. Further, the coefficient values of the clothoid parameters are updated using the Kalman filter parameters, the measured clothoid points are reconstructed using the updated coefficient values, and the lane boundaries are tracked using the reconstructed clothoid points. Further, training error minimization is enhanced by deriving the training error from the L2 norm between clothoid points rather than from the coefficients. The trained lane tracking system is then deployed for lane tracking in a dynamic environment.

Description

Method and system for lane tracking of autonomous vehicles
Technical Field
The present subject matter relates generally to the field of autonomous vehicles and, more particularly, but not exclusively, to a method and system for lane tracking of an autonomous vehicle.
Background
Today, the automotive industry has begun to evolve toward autonomous vehicles. As used in the present specification and claims, an autonomous vehicle is a vehicle that is capable of sensing the surrounding environment to move on a road with or without human intervention. The autonomous vehicle senses the environment by means of sensors configured in the autonomous vehicle, such as lasers, Light Detection and Ranging (LIDAR), a Global Positioning System (GPS), computer vision, etc. Autonomous vehicles are highly dependent on lane detection and tracking to navigate smoothly on roads.
Existing lane detection and tracking techniques may use a Kalman filter to track lane boundaries. In particular, a Kalman filter may be used to predict lane parameters and smooth the output of a lane tracker that tracks lane boundaries. Typically, a Kalman filter is selected for tracking lane boundaries because it can estimate state vector dynamics even in the presence of noisy measurements or process noise. The main parameters that determine the Kalman filter are the process noise covariance matrix (Q) and the measurement noise covariance matrix (R). Existing lane detection and tracking techniques that rely on a Kalman filter use predefined or fixed Q and R values. In practice, Q and R are parameters that change dynamically based on the scene, the detector used for measurement, the type of procedure used for measurement and tracking, and so on. However, the prior art fails to incorporate the dynamics of Q and R and instead uses fixed or predefined values of Q and R, which affects the accuracy of Kalman-filter-based lane tracking predictions. Inaccurate lane tracking may generate false steering commands and warning signals to the autonomous vehicle, which may compromise vehicle safety.
In addition, since the Q and R values are fixed in the prior art, the prior art lacks the flexibility to incorporate state changes occurring over time, thereby limiting prediction to only a few types or a small range of lane structures.
Therefore, there is a need for a method that enables lane tracking using a Kalman filter with increased accuracy and flexibility.
The information disclosed in the background of the disclosure is only for enhancement of understanding of the general background of the disclosure and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art that is known to a person skilled in the art.
Disclosure of Invention
A method of training a lane tracking system for an autonomous vehicle is disclosed herein. The method comprises the following steps: the lane tracking system receives ground truth values corresponding to a plurality of lane boundary detection points and measured values corresponding to the plurality of lane boundary detection points from a lane boundary detection system associated with the lane tracking system. Further, the method comprises: the coefficient values of the clothoid parameters are determined for ground truth clothoid points and measured clothoid points formed using the ground truth set and the measurement set, respectively, to model lane boundaries of the lane. The ground truth set includes a subset of consecutive lane boundary detection points and corresponding ground truth values, and the measurement set includes a subset of consecutive lane boundary detection points and corresponding measurement values. Thereafter, the method comprises: Kalman filter parameters are determined for the coefficient values of the clothoid parameters determined for the measured clothoid points to track lane boundaries of the lane. At least one neural network is used to determine the Kalman filter parameters. After determining the Kalman filter parameters, the method includes updating the coefficient values of the clothoid parameters determined for the measured clothoid points using the corresponding Kalman filter parameters. Further, the method comprises: the measured clothoid points are reconstructed using the corresponding updated coefficient values of the clothoid parameters. Each reconstructed measured clothoid point enables the lane tracking system to track the lane boundaries for the autonomous vehicle. Finally, the method comprises: the training error, based on the difference between the reconstructed measured clothoid points and the corresponding ground truth set, is minimized in each cycle until the training error is below a predefined threshold.
Further, the present disclosure includes a lane tracking system for an autonomous vehicle. The lane tracking system includes a processor and a memory communicatively coupled to the processor. The memory stores processor instructions that, when executed, cause the processor to train the lane tracking system, wherein, for training, the processor is configured to receive ground truth values corresponding to a plurality of lane boundary detection points and measured values corresponding to the plurality of lane boundary detection points from a lane boundary detection system associated with the lane tracking system. Further, the processor determines coefficient values of the clothoid parameters for ground truth clothoid points and measured clothoid points formed using the ground truth set and the measurement set, respectively, to model lane boundaries of the lane. The ground truth set includes a subset of consecutive lane boundary detection points and corresponding ground truth values, and the measurement set includes a subset of consecutive lane boundary detection points and corresponding measurement values. Thereafter, the processor determines Kalman filter parameters for the coefficient values of the clothoid parameters determined for the measured clothoid points to track lane boundaries of the lane. At least one neural network is used to determine the Kalman filter parameters. After determining the Kalman filter parameters, the processor uses the corresponding Kalman filter parameters to update the coefficient values of the clothoid parameters determined for the measured clothoid points. Further, the processor reconstructs the measured clothoid points using the corresponding updated coefficient values of the clothoid parameters. Each reconstructed measured clothoid point enables the lane tracking system to track the lane boundaries for the autonomous vehicle. Finally, the processor minimizes a training error based on a difference between the reconstructed measured clothoid points and the corresponding ground truth set in each cycle until the training error is below a predefined threshold.
Further, the present disclosure discloses a method for lane tracking of an autonomous vehicle. The method includes the lane tracking system receiving measurements corresponding to a plurality of lane boundary detection points from a lane boundary detection system associated with the lane tracking system. Thereafter, the method comprises: coefficient values of the clothoid parameters of the measured clothoid points formed using the measurement set are determined to model lane boundaries of the lane. The measurement set comprises a subset of consecutive lane boundary detection points and corresponding measurement values. Subsequently, the method comprises: Kalman filter parameters are determined for the coefficient values of the clothoid parameters determined for the measured clothoid points to track lane boundaries of the lane. At least one neural network is used to determine the Kalman filter parameters. After determining the Kalman filter parameters, the method comprises: the coefficient values of the clothoid parameters determined for the measured clothoid points are updated using the corresponding Kalman filter parameters. Finally, the method includes reconstructing the measured clothoid points using the corresponding updated coefficient values of the clothoid parameters. Each reconstructed measured clothoid point enables the lane tracking system to track the lane boundaries for the autonomous vehicle.
Further, the present disclosure discloses a lane tracking system for an autonomous vehicle. The lane tracking system includes a processor and a memory communicatively coupled to the processor. The memory stores processor instructions that, when executed, cause the processor to receive measurements corresponding to a plurality of lane boundary detection points from a lane boundary detection system associated with the lane tracking system. Thereafter, the processor determines coefficient values of the clothoid parameters of the measured clothoid points formed using the measurement set to model lane boundaries of the lane. The measurement set comprises a subset of consecutive lane boundary detection points and corresponding measurement values. The processor then determines Kalman filter parameters for the coefficient values of the clothoid parameters determined for the measured clothoid points to track lane boundaries of the lane. At least one neural network is used to determine the Kalman filter parameters. After determining the Kalman filter parameters, the processor uses the corresponding Kalman filter parameters to update the coefficient values of the clothoid parameters determined for the measured clothoid points. Finally, the processor reconstructs the measured clothoid points using the corresponding updated coefficient values of the clothoid parameters. Each reconstructed measured clothoid point enables the lane tracking system to track the lane boundaries for the autonomous vehicle.
The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and together with the description, serve to explain the principles of the disclosure. In the drawings, the leftmost digit(s) of a reference number identifies the drawing in which the reference number first appears. The same numbers are used throughout the drawings to reference like features and components. Some embodiments of systems and/or methods according to embodiments of the present subject matter will now be described, by way of example only, with reference to the accompanying drawings, in which:
FIG. 1 illustrates an exemplary architecture for performing lane tracking on an autonomous vehicle according to some embodiments of the present disclosure.
Fig. 2A illustrates a detailed block diagram of an exemplary lane tracking system for lane tracking of an autonomous vehicle according to some embodiments of the present disclosure.
Fig. 2B illustrates an exemplary plurality of lane boundary detection points positioned consecutively according to an exemplary ground truth value according to some embodiments of the present disclosure.
Fig. 2C illustrates an exemplary plurality of lane boundary detection points positioned consecutively according to exemplary measurements according to some embodiments of the present disclosure.
Fig. 2D illustrates an example lane tracked using example reconstructed clothoid points according to some embodiments of the disclosure.
Fig. 2E illustrates example lanes being tracked that belong to a roadway on which an autonomous vehicle is moving, according to some embodiments of the present disclosure.
Fig. 3A shows a flowchart illustrating a method of training a lane tracking system for an autonomous vehicle, in accordance with some embodiments of the present disclosure.
Fig. 3B shows a flowchart illustrating a method for lane tracking of an autonomous vehicle, according to some embodiments of the present disclosure.
FIG. 4 is a block diagram of an exemplary computer system for implementing embodiments consistent with the present disclosure.
It will be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative systems embodying the principles of the subject matter. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudocode, and the like represent various processes which may be substantially represented in computer readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
Detailed Description
In this document, the word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment or implementation of the subject matter described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
While the disclosure is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that there is no intent to limit the disclosure to the forms disclosed, but on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the scope of the disclosure.
The terms "comprises," "comprising," "includes," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a device, apparatus, or method that comprises a list of elements or steps does not include only those elements or steps, but may include other elements or steps not expressly listed or inherent to such device, apparatus, or method. In other words, without further constraints, an element of a system or apparatus preceded by "comprises... a" does not exclude the presence of other elements or additional elements in the system or method.
A method and system for lane tracking of an autonomous vehicle is disclosed herein. In some embodiments, lane tracking includes detecting whether one or more lanes are present on a road on which the autonomous vehicle is moving, and tracking the detected lanes to generate one or more commands to the autonomous vehicle. As an example, the one or more commands may be steering commands, braking commands, lane change commands, overtaking commands, warning signals, etc. that facilitate movement of the autonomous vehicle. The present disclosure provides an Artificial Intelligence (AI) -based method of performing lane tracking of an autonomous vehicle in a manner that solves one or more of the problems of the prior art set forth in the background section of the present disclosure. Since this approach is an AI-based approach, the lane tracking system disclosed in this disclosure for tracking lanes for autonomous vehicles requires training prior to deployment in a dynamic environment.
In some embodiments, the lane tracking system may be trained using ground truth values corresponding to a plurality of lane boundary detection points and measurements corresponding to the plurality of lane boundary detection points, the measurements being determined using image frames corresponding to various road scenes and different types of lanes. In some embodiments, a ground truth value may refer to an original value, or in other words, information that is known to be true and provided based on direct observation. A measurement, however, is a value determined or predicted by the system, not based on direct observation like a ground truth value. Thus, the measured value and the ground truth value may be the same or different depending on the accuracy of the measured value. In some embodiments, a subset of the measurements may be used to form measured clothoid points, and a subset of the ground truth values may be used to form ground truth clothoid points. During the training phase, the lane tracking system may be trained to determine Kalman filter parameters to update the coefficient values of the clothoid parameters of the measured clothoid points, and then to reconstruct the measured clothoid points using the updated coefficient values of the clothoid parameters. Thereafter, the lane tracking system may be trained to determine a training error and minimize the training error in each cycle. In the following detailed description of the present disclosure, the training phase of the lane tracking system for tracking a lane for an autonomous vehicle based on clothoid points is detailed using the appropriate figures.
During the training phase, the lane tracking system may use one or more neural networks to determine the Kalman filter parameters. The one or more neural networks may include a neural network having a temporal memory, such as a Recurrent Neural Network (RNN) or a Long Short-Term Memory (LSTM) neural network. The one or more neural networks may be capable of storing memory associated with historical events and learning long-term dependencies based on the stored memory. In the present disclosure, the lane tracking system may be trained during the training phase to dynamically determine the measurement noise covariance matrix (R) and the process noise covariance matrix (Q) using one or more identical or different neural networks. For example, a first neural network may be used to dynamically determine the measurement noise covariance matrix (R), and a second neural network may be used to dynamically determine the process noise covariance matrix (Q). Each of the first and second neural networks may be a neural network having a temporal memory, such as an RNN or LSTM network. In some other embodiments, the measurement noise covariance matrix (R) and the process noise covariance matrix (Q) may be dynamically determined using the same neural network. In practice, Q and R change dynamically based on the scene, the detector used for measurement, the type of procedure used for measurement and tracking, and so on. Determining Q and R with a neural network means that Q and R can be determined based on historical data captured from past cycles of the autonomous vehicle. Thus, the neural network can determine Q and R by analyzing their change over time, resulting in a joint evolution of Q and R. The Q and R values determined using the neural network(s) are therefore not random, predefined, or static values, but are specific to the current scene captured in the image frame. Since the Q and R values are determined using a data-driven neural network, i.e., by analyzing historical data captured from past cycles and using the analysis results to determine Q and R for the current scene, the determined Q and R values are accurate and robust. Furthermore, since ground truth values and measurements are used to train the neural network during the training phase, the determined values of Q and R are closer to the ground truth values, which increases the accuracy of the dynamically determined Q and R values. Such dynamically determined, accurate Q and R values enable the determination of an accurate Kalman filter for lane tracking, which in turn yields accurately updated coefficient values of the clothoid parameters and reconstructed measured clothoid points.
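To make the idea concrete, below is a minimal sketch, not the patent's implementation, of how an LSTM with temporal memory could map a history of clothoid coefficient values to a noise covariance matrix. The class name, dimensions, and the diagonal-covariance parameterization are illustrative assumptions; PyTorch is used for the sketch.

```python
import torch
import torch.nn as nn

class NoiseCovarianceLSTM(nn.Module):
    """Illustrative sketch: maps a history of clothoid coefficient
    vectors (beta, c0, c1) to a diagonal noise covariance matrix."""
    def __init__(self, state_dim: int = 3, hidden_dim: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, state_dim)

    def forward(self, coeff_history: torch.Tensor) -> torch.Tensor:
        # coeff_history: (batch, time, state_dim) values from past cycles
        out, _ = self.lstm(coeff_history)
        # softplus keeps the predicted variances strictly positive
        variances = nn.functional.softplus(self.head(out[:, -1, :]))
        return torch.diag_embed(variances)  # (batch, state_dim, state_dim)

# One network per covariance, mirroring the first/second network split above
r_network = NoiseCovarianceLSTM()  # measurement noise covariance (R)
q_network = NoiseCovarianceLSTM()  # process noise covariance (Q)
```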
In each cycle, a training error that may occur due to reconstructing the clothoid points may be determined based on the difference between the reconstructed clothoid points and the corresponding ground truth set (i.e., the subset of ground truth values of the corresponding lane boundary detection points). This training error may be minimized until it is below a predefined threshold. Thus, by minimizing the training error during the training phase, even the slightest training error that may negatively impact lane tracking accuracy is reduced. In the present disclosure, the training error is obtained by using the L2 norm between clothoid points rather than the clothoid coefficients used in the prior art. Thus, the present disclosure follows a new method of minimizing errors, which includes comparing the reconstructed measured clothoid points with the ground truth values of the corresponding lane boundary detection points originally used to form them. This enhances error minimization in a better manner and with fewer cycles than conventional error minimization techniques, which compare the updated coefficient values of the clothoid parameters of the measured clothoid points to the coefficient values of the clothoid parameters of the ground truth clothoid points.
In addition, because Q and R essentially indicate process noise and measurement noise, respectively, the determination of Q and R requires data from the one or more sensors configured in the autonomous vehicle. The use of a neural network (such as an RNN or LSTM network) provides the flexibility to include sensor error models in the neural network. Such a sensor error model may provide low-level features related to one or more sensors, such as the amount of noise involved in the measurement. Such low-level features help to directly correct sensor errors using the sensor error model and improve the accuracy of the dynamically determined Q and R values. This in turn helps to perform lane tracking with the highest accuracy. Therefore, lane tracking based on the clothoid parameters using the neural network and the Kalman filter not only enables accurate lane tracking, but also reduces the generation of erroneous steering commands and warning signals and enhances the safety of the autonomous vehicle.
The description of an embodiment with several components in communication with each other does not imply that all such components are required. Rather, the various optional components are described to illustrate the wide variety of possible embodiments of the present disclosure.
In the following detailed description of embodiments of the present disclosure, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration specific embodiments in which the disclosure may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosure, and it is to be understood that other embodiments may be utilized and that changes may be made without departing from the scope of the disclosure. The following description is, therefore, not to be taken in a limiting sense.
Fig. 1 illustrates an exemplary architecture for lane tracking of an autonomous vehicle according to some embodiments of the present disclosure.
The architecture 100 includes an autonomous vehicle 101, a lane tracking system 103, a lane boundary detection system 105, sensors 113₁ to 113ₙ (also referred to as one or more sensors 113), and an image capture device 115. As an example, the autonomous vehicle 101 may be a car, bus, truck, van, etc., integrated with an Electronic Control Unit (ECU) and systems capable of communicating via the on-board network of the autonomous vehicle. In some embodiments, the lane boundary detection system 105 may be associated with the lane tracking system 103 via a communication network (not shown in fig. 1). The communication network may be at least one of a wired communication network and a wireless communication network. In some embodiments, both the lane boundary detection system 105 and the lane tracking system 103 may be configured in the autonomous vehicle 101 to perform lane tracking for the autonomous vehicle 101. In some other embodiments, both the lane boundary detection system 105 and the lane tracking system 103 may be externally associated with the ECU of the autonomous vehicle 101 to perform lane tracking for the autonomous vehicle 101. In other embodiments, one of these systems may be configured in the autonomous vehicle 101, and the other system may be externally associated with the ECU of the autonomous vehicle 101 to perform lane tracking for the autonomous vehicle 101.
In some embodiments, the autonomous vehicle 101 may be configured with one or more sensors 113 and image capture devices 115. The autonomous vehicle 101 may perceive the environment by means of one or more sensors 113, such as light detection and ranging (LIDAR), global Positioning System (GPS), computer vision, etc. Further, the image capturing device 115 may be mounted to the autonomous vehicle 101 to capture image frames of an area in front of the autonomous vehicle 101. In some embodiments, the image capture device 115 may include, but is not limited to, a Red Green Blue (RGB) camera, a monochrome camera, a depth camera, a 360 degree camera, a night vision camera, and the like. In some embodiments, the autonomous vehicle 101 may be installed with more than one image capture device 115. The image capture device(s) 115 may be installed in an area of the autonomous vehicle 101 such that the area in front of the autonomous vehicle 101 is appropriately covered by the image frame. For example, the image capture device(s) 115 may be mounted on top of the autonomous vehicle 101, a headlight area of the autonomous vehicle 101, an external rearview mirror, or the like.
In some embodiments, the lane tracking system 103 is an Artificial Intelligence (AI) based system that may be trained to perform lane tracking for the autonomous vehicle 101 prior to deploying the lane tracking system 103 in a dynamic environment while the autonomous vehicle 101 is navigating. In some embodiments, during the training phase, the lane tracking system 103 may receive ground truth values corresponding to a plurality of lane boundary detection points and measured values corresponding to the plurality of lane boundary detection points from the lane boundary detection system 105. In some embodiments, a ground truth value may refer to an original value, or in other words, information that is known to be true and provided based on direct observation. A measurement, however, is a value determined or predicted by the system, not based on direct observation like a ground truth value. In some embodiments, a lane boundary may be a line marking the boundary of a lane. Each lane may have a left lane boundary and a right lane boundary surrounding the lane. In some embodiments, the plurality of lane boundary detection points may be points indicative of a boundary region of the road along which the autonomous vehicle 101 moves. In other words, the plurality of lane boundary detection points correspond to the left and right lane boundaries of a plurality of lanes belonging to the road along which the autonomous vehicle 101 is moving. A subset of consecutive lane boundary detection points and the corresponding ground truth values may be referred to as a ground truth set, and a subset of consecutive lane boundary detection points and the corresponding measurement values may be referred to as a measurement set. Thereafter, the lane tracking system 103 may generate ground truth clothoid points using the ground truth set and generate measured clothoid points using the measurement set. In some embodiments, during this training phase, the lane tracking system 103 may be trained to select the ground truth sets and the measurement sets required for generating the ground truth clothoid points and the measured clothoid points, respectively. A clothoid is a spiral curve whose curvature varies linearly with arc length, which allows the steering wheel to move smoothly as the autonomous vehicle 101 moves on road segments with different horizontal curvatures. The lane tracking system 103 may thereafter be trained to determine the coefficient values of the clothoid parameters of the ground truth clothoid points and the measured clothoid points to model the lane boundaries of the lane along which the autonomous vehicle 101 is to move. In some embodiments, the clothoid parameters may include, but are not limited to, an initial curvature of the lane boundary (c₀), a rate of curvature of the lane boundary (c₁), and a heading angle (β) relative to the direction of travel of the autonomous vehicle. In some embodiments, the initial curvature (c₀) of the lane boundary may be defined as the first angle of curvature of the lane determined in the image frame, the rate of curvature (c₁) of the lane boundary may be defined as the rate at which the lane curvature changes in the image frame when compared to the initial curvature, and the heading angle (β) may be defined as the angle of the lane relative to the direction in which the autonomous vehicle moves on the lane.
Thereafter, the lane tracking system 103 may be trained to determine Kalman filter parameters for the coefficient values of the clothoid parameters determined for the measured clothoid points to track the lane boundaries of the lane. In some embodiments, the lane tracking system 103 may use at least one neural network, such as an RNN or Long Short-Term Memory (LSTM) network, to determine the Kalman filter parameters. After determining the Kalman filter parameters, the lane tracking system 103 may be trained to update the coefficient values of the clothoid parameters determined for the measured clothoid points using the corresponding Kalman filter parameters. Thereafter, the lane tracking system 103 may be trained to reconstruct the measured clothoid points using the corresponding updated coefficient values of the clothoid parameters. In some embodiments, each reconstructed measured clothoid point enables the lane tracking system 103 to track the lane boundaries for the autonomous vehicle 101. The lane tracking system 103 may then determine a training error by calculating the difference between the reconstructed clothoid points and the corresponding ground truth set. During the training phase, the lane tracking system 103 may minimize the training error determined in each cycle until the training error is below a predefined threshold.
The lane tracking system 103 so trained may be used in a dynamic environment when the autonomous vehicle 101 is moving on a road. In some embodiments, the lane tracking system 103 may include a processor 107, an input/output (I/O) interface 109, and a memory 111, as shown in fig. 1. The I/O interface 109 of the lane tracking system 103 may receive measurements corresponding to a plurality of lane boundary detection points from the lane boundary detection system 105. The plurality of lane boundary detection points correspond to the left and right lane boundaries of the lane along which the autonomous vehicle 101 is moving. The processor 107 may generate measured clothoid points based on the measurement set, which includes a subset of consecutive lane boundary detection points and corresponding measurement values. In some embodiments, the processor 107 may dynamically select the subset of consecutive lane boundary detection points to form the measurement set. In some embodiments, the processor 107 may determine the coefficient values of the clothoid parameters of the measured clothoid points to model the lane boundaries of the lane. Thereafter, the processor 107 may determine Kalman filter parameters for the coefficient values of the clothoid parameters determined for the measured clothoid points to track the lane boundaries of the lane, and update the coefficient values of the clothoid parameters determined for the measured clothoid points using the Kalman filter parameters. Finally, the processor 107 may reconstruct the measured clothoid points using the corresponding updated coefficient values of the clothoid parameters. In some embodiments, each reconstructed measured clothoid point may enable the lane tracking system 103 to track the lane boundaries for the autonomous vehicle 101.
Fig. 2A illustrates a detailed block diagram of a lane tracking system 103 for an autonomous vehicle according to some embodiments of the present disclosure.
In some implementations, the lane tracking system 103 may include data 203 and modules 205. As an example, the data 203 is stored in the memory 111 of the lane tracking system 103, as shown in fig. 2A. In one embodiment, the data 203 may include training data 207, clothoid point data 209, Kalman filter parameter data 211, reconstruction data 213, and other data 215. The modules 205 illustrated in fig. 2A are described in detail herein.
In some embodiments, data 203 may be stored in memory 111 in the form of various data structures. In addition, data 203 may be organized using a data model (such as a relational data model or a hierarchical data model). Other data 215 may store data generated by module 205 for performing various functions of lane tracking system 103, including temporary data and temporary files.
In some embodiments, the training data 207 may include data for training the lane tracking system 103 for lane tracking of the autonomous vehicle 101. For example, the training data 207 may include, but is not limited to, ground truth values corresponding to a plurality of lane boundary detection points and measured values corresponding to the plurality of lane boundary detection points for training the lane tracking system 103, ground truth clothoid points and measured clothoid points generated using the ground truth sets and measurement sets, respectively, Kalman filter parameters and coefficient values of the clothoid parameters determined for the measured clothoid points, updated coefficient values of the clothoid parameters, reconstructed measured clothoid points, and training errors.
In some embodiments, clothoid point data 209 may include data related to clothoid points generated in a dynamic environment when autonomous vehicle 101 is moving on a road. As an example, the clothoid point data 209 may include, but is not limited to, a measured clothoid point, a set of measurements (including a plurality of subsets of consecutive lane boundary detection points and corresponding measurements) for generating the measured clothoid point, and coefficient values of clothoid parameters determined for the measured clothoid point.
In some embodiments, the Kalman filter parameter data 211 may include, but is not limited to, the Kalman filter parameters determined for the coefficient values of the clothoid parameters determined for the measured clothoid points using at least one neural network, which may include an RNN or Long Short-Term Memory (LSTM) network.
In some embodiments, the at least one neural network may comprise a neural network having a temporal memory. A neural network with temporal memory may be a special type of Recurrent Neural Network (RNN) capable of learning long-term dependencies. The at least one neural network may comprise an LSTM, which is specifically designed to avoid the long-term dependency problem. All RNNs take the form of a chain of repeating neural network modules. In a standard RNN, this repeating module has a very simple structure, such as a single hyperbolic tangent layer. An LSTM also has this chain structure, but its repeating module has a different structure compared to a general RNN: instead of a single neural network layer, there are four layers that interact in a particular way. An LSTM has the ability to remove or add information to the cell state, carefully regulated by structures called gates. A gate is a way to optionally let information through. For example, an LSTM has three such gates to protect and control the cell state: (a) an input gate, which decides which new information is to be stored in the cell state; (b) a forget gate, which decides which information to discard from the cell state; and (c) an output gate, which decides which information is to be output.
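For reference, a standard formulation of these gates (general LSTM background, not notation from this disclosure) is shown below, where σ is the sigmoid function, ⊙ is element-wise multiplication, x_t is the input, h_t the hidden state, and c_t the cell state:

$$
\begin{aligned}
i_t &= \sigma(W_i [h_{t-1}, x_t] + b_i) &&\text{(input gate)}\\
f_t &= \sigma(W_f [h_{t-1}, x_t] + b_f) &&\text{(forget gate)}\\
o_t &= \sigma(W_o [h_{t-1}, x_t] + b_o) &&\text{(output gate)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tanh(W_c [h_{t-1}, x_t] + b_c)\\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
$$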
In some embodiments, the reconstruction data 213 may include, but is not limited to, reconstructed measured clothoid points and updated coefficient values of clothoid parameters determined for the measured clothoid points, which are used to reconstruct the measured clothoid points.
In some embodiments, the data 203 stored in the memory 111 may be processed by a module 205 of the lane tracking system 103. The module 205 may be stored within the memory 111. In an example, the module 205 communicatively coupled to the processor 107 of the lane tracking system 103 may also reside external to the memory 111 (as shown in fig. 2A) and be implemented as hardware. As used herein, the term "module 205" may refer to an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
In some embodiments, the module 205 may include, for example, a receiving module 221, a coefficient value determination module 223, a kalman filter determination module 225, a reconstruction module 227, a learning module 229, and other modules 231. Other modules 231 may be used to perform various functions of the lane tracking system 103. It should be appreciated that the above-described modules 205 may be represented as a single module or as a combination of different modules.
The lane tracking system 103 may be trained to perform lane tracking on the autonomous vehicle 101 before the lane tracking system 103 is deployed for a dynamic environment while the autonomous vehicle 101 is navigating.
In some embodiments, during the training phase, the receiving module 221 may receive ground truth values corresponding to a plurality of lane boundary detection points and measured values corresponding to the plurality of lane boundary detection points from the lane boundary detection system 105 associated with the lane tracking system 103. In some embodiments, the plurality of lane boundary detection points correspond to the left and right lane boundaries of a plurality of lanes belonging to the road along which the autonomous vehicle 101 is moving. In some embodiments, the lane boundary detection system 105 may determine the plurality of lane boundary detection points using at least one of lane data received from the one or more sensors 113 configured in the autonomous vehicle 101 and image frames of the lane received in real time. In some embodiments, the image frames of the lane are received from the image capture device 115 associated with the autonomous vehicle 101. In some other embodiments, at least one of the lane data and the image frames of the lane is taken from a database configured to store the lane data and the image frames captured in real time. As examples, the lane data may include, but is not limited to, lane markings, lane patterns, lane colors, number of lanes, and the like. Fig. 2B shows an exemplary diagram of a plurality of lane boundary detection points 233 located consecutively according to exemplary ground truth values. Since the ground truth values corresponding to the exemplary plurality of lane boundary detection points 233 are original values, the line indicated in white on the lane in fig. 2B is a smooth line that is properly superimposed on the lane of the road. Fig. 2C shows an exemplary diagram of a plurality of lane boundary detection points 235 located consecutively according to exemplary measurements. Since the measurements corresponding to the exemplary plurality of lane boundary detection points 235 are measured by the lane tracking system 103 being trained, the line indicated in white on the lane in fig. 2C is not smooth and is not properly superimposed on the lane of the road.
Further, in some embodiments, during the training phase, the coefficient value determination module 223 may select a subset of consecutive lane boundary detection points and corresponding ground truth values as the ground truth set, and a subset of consecutive lane boundary detection points and corresponding measured values as the measurement set. The coefficient value determination module 223 may generate ground truth clothoid points using the ground truth set and generate measured clothoid points using the measurement set. In some embodiments, a predefined number of consecutive lane boundary detection points is typically used to generate a clothoid point. Thus, as the autonomous vehicle 101 is moving along the lane of the road, the ground truth sets and the measurement sets are selected consecutively, one by one, to generate the corresponding ground truth clothoid points and measured clothoid points. An exemplary ground truth set of "N" lane boundary detection points including ground truth values may be as follows:
[(x0, y0), (x1, y1), (x2, y2), …, (xN-1, yN-1)]
Similarly, an exemplary measurement set of "M" lane boundary detection points including measured values may be as follows:
[(x0, y0), (x1, y1), (x2, y2), …, (xM-1, yM-1)]
Thereafter, during the training phase, the coefficient value determination module 223 may be trained to determine the coefficient values of the clothoid parameters of the ground truth clothoid points and the measured clothoid points to model the lane boundaries of the lane along which the autonomous vehicle 101 is to move. In some embodiments, the clothoid parameters may include, but are not limited to, an initial curvature of the lane boundary (c₀), a rate of curvature of the lane boundary (c₁), and a heading angle (β) relative to the direction of travel of the autonomous vehicle. In some embodiments, the initial curvature (c₀) of the lane boundary may be defined as the first angle of curvature of the lane determined in the image frame, the rate of curvature (c₁) of the lane boundary may be defined as the rate at which the lane curvature changes in the image frame when compared to the initial curvature, and the heading angle (β) may be defined as the angle at which the autonomous vehicle is expected to move forward relative to the curvature of the lane.
In some embodiments, because the clothoid integral cannot be evaluated in closed form, the coefficient value determination module 223 may be trained to determine the ground truth clothoid points and the measured clothoid points using the third-order approximation of equation 1 below:

y(x) = x_offset + β·x + (c₀/2)·x² + (c₁/6)·x³ (equation 1)

In equation 1 above,
- x and y refer to a lane boundary detection point of one of the ground truth set or the measurement set;
- c₀ refers to the initial curvature of the lane boundary;
- c₁ refers to the rate of curvature of the lane boundary;
- β refers to the heading angle relative to the direction of travel of the autonomous vehicle; and
- x_offset refers to the initial lateral offset between the lane boundary and the autonomous vehicle 101 (host vehicle).
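As an illustration of equation 1 as reconstructed above (the closed-form polynomial is an assumption recovered from the parameter definitions), the following sketch evaluates clothoid points ahead of the vehicle; all numeric values are hypothetical:

```python
import numpy as np

def clothoid_y(x, x_offset, beta, c0, c1):
    """Third-order clothoid approximation of a lane boundary:
    lateral position y as a function of longitudinal distance x."""
    return x_offset + beta * x + (c0 / 2.0) * x**2 + (c1 / 6.0) * x**3

# Sample clothoid points over a hypothetical 60 m look-ahead
x = np.linspace(0.0, 60.0, 30)
y = clothoid_y(x, x_offset=1.8, beta=0.01, c0=1e-3, c1=1e-5)
clothoid_points = np.column_stack([x, y])  # (N, 2) array of points
```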
In some embodiments, the coefficient value determination module 223 may be trained to determine the coefficient values of the clothoid parameters of the ground truth clothoid points using equation 2 below, a least-squares fit over the increments of consecutive lane boundary detection points:

[β, c₀, c₁]ᵀ = (AᵀA)⁻¹ · Aᵀ · b (equation 2)

In equation 2 above, the matrix A and the vector b are assembled from the increments of the lane boundary detection points of the ground truth set, determined as follows:

Δx0 = x1 - x0, Δy0 = y1 - y0
Δx1 = x2 - x1, Δy1 = y2 - y1

and, as before,
- c₀ refers to the initial curvature of the lane boundary;
- c₁ refers to the rate of curvature of the lane boundary; and
- β refers to the heading angle relative to the direction of travel of the autonomous vehicle.

Using equation 2 above, the coefficient value determination module 223 may also determine the coefficient values of the clothoid parameters of the measured clothoid points. In some embodiments, the coefficient values of the clothoid parameters of the ground truth clothoid points and the measured clothoid points represent the state of the lane boundary along which the autonomous vehicle 101 is moving during the training phase.
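The exact assembly of A and b in equation 2 is not fully recoverable from the source; as a hedged sketch, equivalent coefficients can be obtained by an ordinary least-squares fit of the polynomial form of equation 1 to the detection points:

```python
import numpy as np

def fit_clothoid_coefficients(points):
    """Least-squares fit of y = x_off + beta*x + (c0/2)*x^2 + (c1/6)*x^3
    to an (N, 2) array of lane boundary detection points."""
    x, y = points[:, 0], points[:, 1]
    # Design matrix columns match the terms of equation 1
    A = np.column_stack([np.ones_like(x), x, x**2 / 2.0, x**3 / 6.0])
    x_off, beta, c0, c1 = np.linalg.lstsq(A, y, rcond=None)[0]
    return x_off, beta, c0, c1
```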
Thereafter, the Kalman filter determination module 225 may be trained to determine the Kalman filter parameters for the coefficient values of the clothoid parameters determined for the measured clothoid points to track the lane boundaries of the lane. In some embodiments, the Kalman filter determination module 225 may use at least one neural network, such as an RNN or Long Short-Term Memory (LSTM) network, to determine the Kalman filter parameters. The Kalman filter determination module 225 may use one or more neural networks to determine the Kalman filter parameters. In some embodiments, the Kalman filter determination module 225 may initially provide the coefficient values of the clothoid parameters determined for the measured clothoid points as input to a first neural network. In some embodiments, the first neural network may be trained based on historical coefficient values of clothoid parameters determined for clothoid points formed using historical measurement sets and ground truth sets. Thereafter, the Kalman filter determination module 225 may determine the measurement noise covariance matrix (R) using the first neural network and the coefficient values of the clothoid parameters determined for the measured clothoid points. The first neural network may be capable of learning long-term dependencies and may comprise an RNN or LSTM network. After determining "R", the Kalman filter determination module 225 may predict a state transition (Y_p) of the coefficient values of the clothoid parameters determined for the measured clothoid points from one image frame to the next, based on the speed of the autonomous vehicle 101 moving along the lane and the time difference between consecutive image frames. In some embodiments, the state transition may be predicted using a matrix as shown below:
F = | 1   v·Δt   (v·Δt)²/2 |
    | 0     1       v·Δt   |
    | 0     0         1    |

In the above matrix,
"v" refers to the speed of the autonomous vehicle; and
"Δt" refers to the time difference between successive image frames.
Thereafter, the Kalman filter determination module 225 may provide the predicted state transition as input to a second neural network to determine the process noise covariance matrix (Q). In some embodiments, the second neural network is also trained using historical host-vehicle speed values and time differences. The second neural network may be capable of learning long-term dependencies and may comprise an RNN or LSTM network. Using the determined process noise covariance matrix (Q), the Kalman filter determination module 225 may predict the error covariance (P_p) of the predicted state transition. Finally, the Kalman filter determination module 225 may determine the Kalman filter parameters for the coefficient values of the clothoid parameters determined for the measured clothoid points based on the predicted state transition (Y_p) and its covariance (P_p), the determined measurement noise covariance matrix (R), and the coefficient values of the clothoid parameters determined for the measured clothoid points. In some embodiments, in a broad sense, the Kalman filter may include, but is not limited to, a state vector (y), a state transition matrix (F), a state error covariance matrix (P), a process noise covariance matrix (Q), a Kalman gain (K), a measurement noise covariance matrix (R), and a measurement value (z) at time (t) (also referred to as the measurement values corresponding to the plurality of lane boundary detection points). As discussed above, the Kalman filter determination module 225 may learn Q and R using the first and second neural networks, respectively. Equations 4 to 8 given below show the determination of the remaining Kalman filter parameters. Equations 4 and 5 relate to the prediction step of the Kalman filter, and equations 6 to 8 relate to the update step.
In the following equation 4, ŷₜ⁻ represents the state vector predicted at time "t", ŷₜ₋₁ represents the state vector determined at time "t-1", and F is the state transition matrix.

ŷₜ⁻ = F · ŷₜ₋₁ (equation 4)

In the following equation 5, Pₜ⁻ represents the state error covariance matrix predicted at time "t", Pₜ₋₁ represents the state error covariance matrix determined at time "t-1", Fᵀ represents the transpose of F, and Q represents the process noise covariance matrix.

Pₜ⁻ = F · Pₜ₋₁ · Fᵀ + Q (equation 5)

In the following equation 6, Kₜ represents the Kalman gain at time "t", Pₜ⁻ represents the state error covariance matrix predicted at time "t", H represents the measurement matrix (identity when the clothoid coefficients are measured directly), and R represents the measurement noise covariance matrix.

Kₜ = Pₜ⁻ · Hᵀ · (H · Pₜ⁻ · Hᵀ + R)⁻¹ (equation 6)

In the following equation 7, ŷₜ represents the state vector updated at time "t", ŷₜ⁻ represents the state vector predicted at time "t", Kₜ represents the Kalman gain at time "t", and z represents the measurement value corresponding to the plurality of lane boundary detection points at time "t".

ŷₜ = ŷₜ⁻ + Kₜ · (z - H · ŷₜ⁻) (equation 7)

In the following equation 8, Pₜ represents the state error covariance matrix updated at time "t", Pₜ⁻ represents the state error covariance matrix predicted at time "t", Kₜ represents the Kalman gain at time "t", and I represents the identity matrix.

Pₜ = (I - Kₜ · H) · Pₜ⁻ (equation 8)
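Putting these together: below is a sketch of the state transition matrix shown above (the 3x3 form over [β, c₀, c₁] is assumed from the v and Δt definitions) and of one predict/update cycle implementing equations 4 to 8. Function names and numeric values are illustrative; Q and R would be supplied by the neural networks described earlier.

```python
import numpy as np

def state_transition_matrix(v, dt):
    """State transition for the clothoid state [beta, c0, c1],
    assuming the vehicle travels d = v * dt along the lane."""
    d = v * dt
    return np.array([
        [1.0, d, d**2 / 2.0],  # beta accumulates curvature terms
        [0.0, 1.0, d],         # c0 accumulates the curvature-rate term
        [0.0, 0.0, 1.0],       # c1 is held constant over one frame
    ])

def kalman_step(y_prev, P_prev, z, F, H, Q, R):
    """One Kalman filter cycle implementing equations 4-8."""
    # Prediction step (equations 4 and 5)
    y_pred = F @ y_prev
    P_pred = F @ P_prev @ F.T + Q
    # Kalman gain (equation 6)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    # Update step (equations 7 and 8)
    y_new = y_pred + K @ (z - H @ y_pred)
    P_new = (np.eye(len(y_prev)) - K @ H) @ P_pred
    return y_new, P_new
```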
After determining the Kalman filter parameters, the reconstruction module 227 may be trained to update the coefficient values of the clothoid parameters determined for the measured clothoid points using the corresponding Kalman filter parameters. As an example, the coefficient values of the clothoid parameters updated based on the Kalman filter parameters may be denoted as follows:

c₀ → c₀_KF, c₁ → c₁_KF, β → β_KF

In the above example, KF refers to a Kalman filter parameter, and a clothoid parameter suffixed with "KF" indicates that the determined Kalman filter parameters have been applied to that clothoid parameter to update its coefficient value.
In some embodiments, upon updating the coefficient values of the clothoid parameters determined for the measured clothoid points, the reconstruction module 227 may be trained to reconstruct the measured clothoid points using the corresponding updated coefficient values of the clothoid parameters. In some embodiments, each reconstructed measured clothoid point may enable the lane tracking system 103 to track the lane boundaries for the autonomous vehicle 101. In some embodiments, the reconstruction module 227 may add the initial lateral offset between the lane boundary and the host vehicle to the reconstructed measured clothoid points. As an example, an exemplary tracked and modeled lane 237 formed based on the reconstructed clothoid points is shown in fig. 2D. In another example, fig. 2E shows a plurality of exemplary lanes being tracked that belong to the road on which the autonomous vehicle 101 is moving. In fig. 2E, the line indicated by reference numeral 235 represents the initial measurements corresponding to the exemplary plurality of lane boundary detection points 235, and the line indicated by reference numeral 237 represents the exemplary tracked and modeled lane 237 formed based on the reconstructed measured clothoid points. Thus, in the present disclosure, the lane tracking system 103 tracks not only the ego lane (i.e., the left and right boundaries of the lane in which the autonomous vehicle 101 is moving) but also the other lanes of the road, as shown in fig. 2E.
Further, the learning module 229 may determine the training error by calculating the difference between the reconstructed clothoid points and the corresponding ground truth set. By minimizing this difference, the learning module 229 enables the lane tracking system 103 to accurately determine the reconstructed measured clothoid points for performing lane tracking of the autonomous vehicle 101 when deployed in a dynamic environment. During the training phase, the learning module 229 may minimize the training error determined in each cycle until the training error falls below a predefined threshold.
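A schematic training cycle consistent with this description might look like the following sketch; the model object, data loader, optimizer, and the choice of mean squared error between reconstructed points and the ground truth set are illustrative assumptions rather than details from the disclosure.

```python
import torch

def train_lane_tracker(model, loader, optimizer, threshold=1e-3, max_cycles=1000):
    """Hypothetical training loop: in each cycle, reconstruct the measured
    clothoid points, compare them against the corresponding ground truth
    set, and minimize the error until it falls below the threshold."""
    for cycle in range(max_cycles):
        cycle_error = 0.0
        for measurements, gt_points in loader:
            optimizer.zero_grad()
            reconstructed = model(measurements)          # reconstructed clothoid points
            loss = torch.nn.functional.mse_loss(reconstructed, gt_points)
            loss.backward()                              # minimize the training error
            optimizer.step()
            cycle_error += loss.item()
        if cycle_error / len(loader) < threshold:        # predefined threshold
            return cycle
    return max_cycles
```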
In some embodiments, the lane tracking system 103 so trained may be used in a dynamic environment when the autonomous vehicle 101 is moving on a road.
In some embodiments, in a dynamic environment, the receiving module 221 may receive measurements corresponding to a plurality of lane boundary detection points from the lane boundary detection system 105. The plurality of lane boundary detection points correspond to left and right lane boundaries of a lane along which the autonomous vehicle 101 is currently moving.
Thereafter, the coefficient value determination module 223 may generate measured clothoid points based on the measurement set, which includes a subset of consecutive lane boundary detection points and the corresponding measurement values. In some embodiments, the coefficient value determination module 223 may dynamically select the subset of consecutive lane boundary detection points forming the measurement set while the autonomous vehicle 101 is moving on the road. The coefficient value determination module 223 may then determine the coefficient values of the clothoid parameters of the measured clothoid points to model the lane boundaries of the lane.
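The disclosure does not spell out the fitting procedure itself; one plausible realization is an ordinary least-squares fit of the third-order clothoid model to the selected subset of consecutive detection points, as sketched below under that assumption.

```python
import numpy as np

def fit_clothoid_coefficients(xs, ys):
    """Least-squares fit of the clothoid model
        y(x) = y0 + beta*x + (c0/2)*x**2 + (c1/6)*x**3
    to a subset of consecutive lane boundary detection points (xs, ys).
    Returns (y0, beta, c0, c1): lateral offset, heading angle, initial
    curvature, and curvature rate."""
    xs = np.asarray(xs, dtype=float)
    A = np.column_stack([np.ones_like(xs), xs, xs**2 / 2.0, xs**3 / 6.0])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(ys, dtype=float), rcond=None)
    y0, beta, c0, c1 = coeffs
    return y0, beta, c0, c1
```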
Further, in some embodiments, the Kalman filter determination module 225 may determine the Kalman filter parameters for the coefficient values of the clothoid parameters determined for the measured clothoid points, to track the lane boundaries of the lane. In some embodiments, the Kalman filter determination module 225 may determine these parameters based on the predicted state transitions (Y_p) and covariance (P_p), the determined measurement noise covariance matrix (R), and the coefficient values of the clothoid parameters determined for the measured clothoid points.
Thereafter, the reconstruction module 227 may use the Kalman filter parameters to update the coefficient values of the clothoid parameters determined for the measured clothoid points. Finally, the reconstruction module 227 may reconstruct the measured clothoid points using the corresponding updated coefficient values of the clothoid parameters. In some embodiments, each reconstructed measured clothoid point may enable the lane tracking system 103 to track the lane boundaries of the autonomous vehicle 101.
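Putting the deployment-phase steps together, a single per-frame tracking cycle could be sketched as follows, reusing the illustrative helpers from the earlier snippets (fit_clothoid_coefficients, kalman_predict, kalman_update, reconstruct_clothoid_points); the state ordering and the interfaces are assumptions.

```python
import numpy as np

def track_lane_boundary(xs, ys, kf_state, F, Q, R, x_eval):
    """One deployment-phase cycle: fit clothoid coefficients to the
    measured detection points, run the Kalman predict/update steps on
    the coefficient vector, and reconstruct the tracked boundary.
    kf_state is the (state vector, error covariance) pair carried over
    from the previous frame."""
    y0, beta, c0, c1 = fit_clothoid_coefficients(xs, ys)   # measured coefficient values
    z = np.array([c0, c1, beta])                           # measurement vector at time t
    y_prev, P_prev = kf_state
    y_pred, P_pred = kalman_predict(y_prev, P_prev, F, Q)
    y_upd, P_upd = kalman_update(y_pred, P_pred, z, R)
    c0_kf, c1_kf, beta_kf = y_upd                          # coefficients updated by the filter
    boundary = reconstruct_clothoid_points(x_eval, c0_kf, c1_kf, beta_kf, y0)
    return boundary, (y_upd, P_upd)
```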
Fig. 3A illustrates a flow chart illustrating a method of training a lane tracking system for an autonomous vehicle, in accordance with some embodiments of the present disclosure.
As illustrated in fig. 3A, the method 300a includes one or more blocks that illustrate a method of training a lane tracking system for the autonomous vehicle 101. The method 300a may be described in the general context of computer-executable instructions. Generally, computer-executable instructions may include routines, programs, objects, components, data structures, procedures, modules, and functions that perform functions or implement abstract data types.
The order in which the method 300a is described is not intended to be construed as a limitation, and any number of the described method blocks may be combined in any order to implement the method 300a. In addition, individual blocks may be deleted from the method without departing from the spirit and scope of the subject matter described herein. Furthermore, method 300a may be implemented in any suitable hardware, software, firmware, or combination thereof.
At block 301, the method 300a may include: during the training phase, the processor 107 of the lane tracking system 103 receives ground truth values corresponding to a plurality of lane boundary detection points and measured values corresponding to the plurality of lane boundary detection points from the lane boundary detection system 105 associated with the lane tracking system 103. In some embodiments, the plurality of lane boundary detection points are determined using at least one of lane data received from the one or more sensors 113 configured in the autonomous vehicle 101 and image frames of the lane received in real time. In some embodiments, the image frames of the lane are received from an image capture device 115 associated with the autonomous vehicle 101. In some other embodiments, at least one of the lane data and the image frames of the lane is retrieved from a database configured to store the lane data and the image frames captured in real time. The database may be associated with the lane tracking system 103.
At block 303, the method 300a may include: during the training phase, the processor 107 determines coefficient values of the clothoid parameters for the ground truth clothoid points and the measured clothoid points, formed using the ground truth set and the measurement set respectively, to model the lane boundaries of the lane. In some embodiments, the processor 107 may select a subset of consecutive lane boundary detection points and the corresponding ground truth values to form the ground truth set, and a subset of consecutive lane boundary detection points and the corresponding measurement values to form the measurement set. In some embodiments, the number of consecutive lane boundary detection points selected to form the ground truth set and the measurement set may be predefined; the selection itself, however, is performed in real time by the processor 107. In some other embodiments, the number of consecutive lane boundary detection points selected to form the ground truth set and the measurement set may be determined by the processor 107 based on the requirements of each image frame. In some embodiments, the clothoid parameters may include, but are not limited to, an initial curvature of the lane boundary (c_0), a rate of curvature of the lane boundary (c_1), and a heading angle (β) relative to the direction of travel of the vehicle.
At block 305, the method 300a may include: during the training phase, the processor 107 determines Kalman filter parameters for the coefficient values of the clothoid parameters determined for the measured clothoid points, to track the lane boundaries of the lane. In some embodiments, the processor 107 may use a neural network, such as an RNN or a long short-term memory (LSTM) network, to determine the Kalman filter parameters.
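As a hedged illustration of the neural-network step in block 305, a small PyTorch LSTM that maps a history of clothoid coefficient vectors to the diagonal of a noise covariance matrix (for example R) could look like the following; the layer sizes and the softplus used to keep the variances positive are assumptions, not details from the disclosure.

```python
import torch
import torch.nn as nn

class NoiseCovarianceLSTM(nn.Module):
    """Illustrative LSTM mapping a sequence of clothoid coefficient
    vectors (c0, c1, beta) to a diagonal noise covariance matrix."""
    def __init__(self, state_dim=3, hidden_dim=32):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, state_dim)

    def forward(self, coeff_seq):
        # coeff_seq: (batch, time, state_dim) history of coefficient values
        out, _ = self.lstm(coeff_seq)
        diag = torch.nn.functional.softplus(self.head(out[:, -1]))  # positive variances
        return torch.diag_embed(diag)            # (batch, state_dim, state_dim)
```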
At block 307, the method 300a may include: during the training phase, the processor 107 uses the corresponding Kalman filter parameters to update the coefficient values of the clothoid parameters determined for the measured clothoid points.
At block 309, method 300a includes: during the training phase, the processor 107 reconstructs the measured clothoid points using the corresponding updated coefficient values of the clothoid parameters. In some embodiments, each reconstructed measured clothoid point enables the lane tracking system 103 to track the lane boundaries of the autonomous vehicle 101.
At block 311, the method 300a includes: during the training phase, the processor 107 minimizes the training error in each cycle based on the difference between the reconstructed measured clothoid points and the corresponding ground truth set until the training error is below a predefined threshold. In this way, for each cycle, the processor 107 reduces errors associated with tracking the lane using the lane tracking system 103 being trained. In some embodiments, the processor 107 may add an initial lateral offset between the lane boundary and the autonomous vehicle 101 to the reconstructed measured clothoid point.
Fig. 3B illustrates a flow chart showing a method for lane tracking of an autonomous vehicle according to some embodiments of the present disclosure.
As illustrated in fig. 3B, method 300B includes one or more blocks that illustrate a method for lane tracking of autonomous vehicle 101. Method 300b may be described in the general context of computer-executable instructions. Generally, computer-executable instructions may include routines, programs, objects, components, data structures, procedures, modules, and functions that perform functions or implement abstract data types.
The order in which the method 300b is described is not intended to be construed as a limitation, and any number of the described method blocks may be combined in any order to implement the method 300b. In addition, individual blocks may be deleted from the method without departing from the spirit and scope of the subject matter described herein. Furthermore, method 300b may be implemented in any suitable hardware, software, firmware, or combination thereof.
At block 313, the method 300b may include: the processor 107 of the lane tracking system 103, in the dynamic environment, receives measurements corresponding to a plurality of lane boundary detection points from the lane boundary detection system 105 associated with the lane tracking system 103 while the autonomous vehicle 101 is moving on the road. In some embodiments, the plurality of lane boundary detection points are determined using at least one of lane data received from the one or more sensors 113 configured in the autonomous vehicle 101 and image frames of the lane received in real time. In some embodiments, the processor 107 may receive the image frames of the lane from an image capture device 115 associated with the autonomous vehicle 101. In some other embodiments, the processor 107 may retrieve at least one of the lane data and the image frames of the lane from a database configured to store the lane data and the image frames captured in real time. The database may be associated with the lane tracking system 103.
At block 315, the method 300b may include: the processor 107, in the dynamic environment, determines coefficient values of the clothoid parameters of the measured clothoid points formed using the measurement set, to model the lane boundaries of the lane. In some embodiments, the processor 107 may select a subset of consecutive lane boundary detection points and the corresponding measurements to form the measurement set. In some embodiments, the number of consecutive lane boundary detection points selected to form the measurement set may be predefined; the selection itself, however, is performed in real time by the processor 107. In some other embodiments, the number of consecutive lane boundary detection points selected to form the measurement set may be determined by the processor 107 based on the requirements of each image frame. In some embodiments, the clothoid parameters may include, but are not limited to, an initial curvature of the lane boundary (c_0), a rate of curvature of the lane boundary (c_1), and a heading angle (β) relative to the direction of travel of the vehicle.
At block 317, the method 300b may include: the processor 107, in the dynamic environment, determines Kalman filter parameters for the coefficient values of the clothoid parameters determined for the measured clothoid points, to track the lane boundaries of the lane. In some embodiments, the processor 107 may determine the Kalman filter parameters using a neural network capable of learning long-term dependencies, such as an RNN or a long short-term memory (LSTM) network.
At block 319, the method 300b may include: the processor 107, in the dynamic environment, uses the corresponding Kalman filter parameters to update the coefficient values of the clothoid parameters determined for the measured clothoid points.
At block 321, method 300b includes: the processor 107 in the dynamic environment reconstructs the measured clothoid points using the corresponding updated coefficient values of the clothoid parameters. In some embodiments, each reconstructed measured clothoid point enables the lane tracking system 103 to track the lane boundaries of the autonomous vehicle 101.
FIG. 4 is a block diagram of an exemplary computer system for implementing embodiments consistent with the present disclosure.
FIG. 4 illustrates a block diagram of an exemplary computer system 400 for implementing embodiments consistent with the present disclosure. In some embodiments, the computer system 400 may be the lane tracking system 103 used for lane tracking of an autonomous vehicle, as shown in FIG. 4. The computer system 400 may include a central processing unit ("CPU" or "processor") 402. The processor 402 may include at least one data processor for executing program components for executing user- or system-generated business processes. A user may include a person, a person using a device such as those included in the present disclosure, or such a device itself. The processor 402 may include specialized processing units such as integrated system (bus) controllers, memory management control units, floating point units, graphics processing units, digital signal processing units, and the like.
The processor 402 may be arranged to communicate with the input device 411 and the output device 412 via the I/O interface 401. The I/O interface 401 may employ communication protocols/methods such as, but not limited to: audio interfaces, analog interfaces, digital interfaces, stereo interfaces, IEEE-1394 interfaces, serial bus interfaces, Universal Serial Bus (USB) interfaces, infrared interfaces, PS/2 interfaces, BNC interfaces, coaxial interfaces, component interfaces, composite interfaces, Digital Visual Interface (DVI), High-Definition Multimedia Interface (HDMI), Radio Frequency (RF) antennas, S-Video, Video Graphics Array (VGA), IEEE 802.11a/b/g/n/x, Bluetooth, cellular (e.g., Code Division Multiple Access (CDMA), High-Speed Packet Access (HSPA+), Global System for Mobile communications (GSM), Long-Term Evolution (LTE), WiMax, etc.), and the like.
Using I/O interface 401, computer system 400 may communicate with input devices 411 and output devices 412.
In some embodiments, the processor 402 may be configured to communicate with a communication network 409 via a network interface 403. The network interface 403 may communicate with the communication network 409. The network interface 403 may employ connection protocols including, but not limited to, direct connect, Ethernet (e.g., twisted pair 10/100/1000 Base T), Transmission Control Protocol/Internet Protocol (TCP/IP), token ring, IEEE 802.11a/b/g/n/x, and the like. Using the network interface 403 and the communication network 409, the computer system 400 may communicate with the lane boundary detection system 105, the one or more sensors 113, and the image capture device 115. In some embodiments, the lane tracking system 103 may also be associated with a database (not shown in FIG. 4). The communication network 409 may be implemented as one of the different types of networks, such as an intranet or a Local Area Network (LAN), a Controller Area Network (CAN), and the like, within the autonomous vehicle. The communication network 409 may either be a dedicated network or a shared network, which represents an association of the different types of networks that use a variety of protocols (e.g., Hypertext Transfer Protocol (HTTP), CAN protocol, Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), etc.) to communicate with each other. Further, the communication network 409 may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, and the like. The one or more sensors 113 may include, but are not limited to, a light detection and ranging (LIDAR) system, a Global Positioning System (GPS), a laser, and the like. In some embodiments, the processor 402 may be configured to communicate with a memory 405 (e.g., RAM, ROM, etc., not shown in FIG. 4) via a storage interface 404. The storage interface 404 may connect to the memory 405 including, without limitation, memory drives, removable disk drives, and the like, employing connection protocols such as Serial Advanced Technology Attachment (SATA), Integrated Drive Electronics (IDE), IEEE-1394, Universal Serial Bus (USB), fibre channel, Small Computer System Interface (SCSI), and the like. The memory drives may further include a drum, a magnetic disk drive, a magneto-optical drive, an optical drive, a Redundant Array of Independent Discs (RAID), solid-state memory devices, solid-state drives, and the like.
Memory 405 may store a collection of programs or database components including, but not limited to, a user interface 406, an operating system 407, a web browser 408, and the like. In some embodiments, computer system 400 may store user/application data, such as data, variables, records, and the like, as described herein. Such a database may be implemented as a fault tolerant, relational, scalable, secure database, such as Oracle or Sybase.
The operating system 407 may facilitate resource management and operation of the computer system 400. Examples of the operating system 407 include, but are not limited to, APPLE MACINTOSH OS X, UNIX, UNIX-like system distributions (e.g., BERKELEY SOFTWARE DISTRIBUTION (BSD), FREEBSD, NETBSD, OPENBSD, etc.), LINUX distributions (e.g., RED HAT, UBUNTU, KUBUNTU, etc.), IBM OS/2, MICROSOFT WINDOWS (XP, VISTA/7/8, 10, etc.), APPLE iOS, GOOGLE ANDROID, BLACKBERRY OS, or the like. The user interface 406 may facilitate the display, execution, interaction, manipulation, or operation of program components through textual or graphical facilities. For example, the user interface 406 may provide computer interaction interface elements, such as cursors, icons, check boxes, menus, scroll bars, windows, widgets, and the like, on a display system operatively connected to the computer system 400. Graphical User Interfaces (GUIs) may be employed, including, but not limited to, Aqua of APPLE MACINTOSH operating systems, IBM OS/2, MICROSOFT WINDOWS (e.g., Aero, Metro, etc.), web interface libraries (e.g., ActiveX, JAVA, JAVASCRIPT, AJAX, HTML, ADOBE FLASH, etc.), or the like.
In some embodiments, the computer system 400 may implement a web browser 408 stored program component. The web browser 408 may be a hypertext viewing application, such as MICROSOFT INTERNET EXPLORER, GOOGLE CHROME, MOZILLA FIREFOX, APPLE SAFARI, or the like. Secure web browsing may be provided using Secure Hypertext Transport Protocol (HTTPS), Secure Sockets Layer (SSL), Transport Layer Security (TLS), and the like. The web browser 408 may utilize facilities such as AJAX, DHTML, ADOBE FLASH, JAVASCRIPT, JAVA, Application Programming Interfaces (APIs), and the like. In some embodiments, the computer system 400 may implement a mail server stored program component. The mail server may be an Internet mail server such as Microsoft Exchange, or the like. The mail server may utilize facilities such as Active Server Pages (ASP), ACTIVEX, ANSI C++/C#, MICROSOFT .NET, CGI SCRIPTS, JAVA, JAVASCRIPT, PERL, PHP, PYTHON, WEBOBJECTS, and the like. The mail server may utilize communication protocols such as Internet Message Access Protocol (IMAP), Messaging Application Programming Interface (MAPI), MICROSOFT Exchange, Post Office Protocol (POP), Simple Mail Transfer Protocol (SMTP), or the like. In some embodiments, the computer system 400 may implement a mail client stored program component. The mail client may be a mail viewing application, such as APPLE MAIL, MICROSOFT ENTOURAGE, MICROSOFT OUTLOOK, MOZILLA THUNDERBIRD, or the like.
Furthermore, embodiments consistent with the present disclosure may be implemented using one or more computer-readable storage media. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term "computer-readable medium" should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include Random Access Memory (RAM), Read-Only Memory (ROM), volatile memory, non-volatile memory, hard drives, Compact Disc (CD) ROMs, Digital Video Discs (DVDs), flash drives, disks, and any other known physical storage media.
The description of an embodiment with several components in communication with each other does not imply that all such components are required. Rather, the various optional components are described to illustrate the wide variety of possible embodiments of the present invention. When a single device or article is described herein, it will be readily apparent that more than one device/article (whether or not they cooperate) may be used in place of a single device/article. Similarly, where more than one device or article is described herein (whether or not they cooperate), it will be readily apparent that a single device/article may be used in place of the more than one device or article or a different number of devices/articles may be used in place of the illustrated number of devices or programs. The functions and/or features of a device may alternatively be embodied by one or more other devices that are not explicitly described as having such functions/features. Thus, other embodiments of the invention need not include the device itself.
The present specification has described methods and systems for lane tracking of an autonomous vehicle 101. The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries may be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc. of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments. Furthermore, the terms "comprising," "having," "containing," and "including," and other similar forms, are intended to be equivalent in meaning and open ended, in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that, as used herein and in the appended claims, the singular forms "a," "an," and "the" include plural references unless the context clearly dictates otherwise.
Finally, the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, the scope of the invention is not limited by the detailed description, but rather by any claims issued in accordance with the application hereof. Accordingly, the embodiments of the invention are intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
Reference numerals
Reference numerals Description
100 Architecture for a computer system
101 Autonomous vehicle
103 Lane tracking system
105 Lane boundary detection system
107 Processor
109 I/O interface
111 Memory device
113 One or more sensors
115 Image capturing apparatus
203 Data
205 Module
207 Training data
209 Data points of clothoid curve
211 Kalman filter parameter data
213 Reconstructing data
215 Other data
221 Receiving module
223 Coefficient value determination module
225 Kalman filter determination module
227 Reconstruction module
229 Learning module
231 Other modules
233 Exemplary multiple lane boundary detection points corresponding to measurements
235 Exemplary multiple lane boundary detection points corresponding to ground truth values
237 Exemplary Lane based on reconstruction clothoid Point tracking and modeling
400 Exemplary computer System
401 I/O interface for an exemplary computer system
402 Processor of exemplary computer System
403 Network interface
404 Storage interface
405 Memory of an exemplary computer system
406 User interface
407 Operating system
408 Web browser
409 Communication network
411 Input device
412 Output device

Claims (30)

1. A method of training a lane tracking system (103) for an autonomous vehicle (101), the method comprising:
A lane tracking system (103) receiving ground truth values corresponding to a plurality of lane boundary detection points and measured values corresponding to the plurality of lane boundary detection points from a lane boundary detection system (105) associated with the lane tracking system (103);
the lane tracking system (103) determines coefficient values of a clothoid parameter for a ground truth clothoid point and a measured clothoid point formed using a ground truth set and a measurement set, respectively, to model a lane boundary of a lane, wherein the ground truth set comprises a subset of consecutive lane boundary detection points and corresponding ground truth values, and the measurement set comprises a subset of consecutive lane boundary detection points and corresponding measurement values;
the lane tracking system (103) determines Kalman filter parameters for coefficient values of the clothoid parameters determined for the measured clothoid points to track lane boundaries of the lane, wherein the Kalman filter parameters are determined using at least one neural network;
The lane tracking system (103) uses the corresponding Kalman filter parameters to update coefficient values of the clothoid parameters determined for the measured clothoid points;
The lane tracking system (103) reconstructs the measured clothoid points using the corresponding updated coefficient values of the clothoid parameters, wherein each reconstructed measured clothoid point enables the lane tracking system (103) to track lane boundaries of the autonomous vehicle (101); and
The lane tracking system (103) minimizes a training error based on a difference between the reconstructed measured clothoid points and the corresponding ground truth set in each cycle until the training error is below a predefined threshold.
2. The method of claim 1, wherein the plurality of lane boundary detection points correspond to left and right lane boundaries of the lane.
3. A method as claimed in any preceding claim, comprising: the lane tracking system (103) dynamically selects the ground truth set and the measurement set needed to generate the ground truth clothoid points and the measured clothoid points, respectively.
4. A method as claimed in any preceding claim, comprising: the plurality of lane boundary detection points are determined using at least one of lane data received from one or more sensors (113) configured in the vehicle and image frames of the lane received in real-time, wherein the image frames of the lane are received from an image capture device (115) associated with the autonomous vehicle (101).
5. The method of claim 4, comprising: at least one of the lane data and the image frames of the lane are retrieved from a database configured to store the lane data and the image frames captured in real time.
6. The method of any preceding claim, wherein the clothoid parameters include an initial curvature (c_0) of a lane boundary, a rate of curvature (c_1) of the lane boundary, and a heading angle (β) relative to the direction of travel of the vehicle.
7. The method of any preceding claim, wherein the at least one neural network comprises a neural network with time memory.
8. The method of any preceding claim, wherein the at least one neural network comprises a recurrent neural network.
9. The method of any preceding claim, wherein the at least one neural network comprises a long short-term memory neural network.
10. The method of any preceding claim, wherein determining the Kalman filter parameters using the at least one neural network comprises:
the lane tracking system (103) provides coefficient values of the clothoid parameters determined for the measured clothoid points as input to a first neural network, wherein the first neural network is trained based on historical coefficient values of the clothoid parameters determined for clothoid points formed using a historical measurement set and a ground truth set;
The lane tracking system (103) determines, using the first neural network, a measurement noise covariance matrix (R) from the coefficient values of the clothoid parameters determined for the measured clothoid points;
The lane tracking system (103) predicts a state transition (Y_p) of coefficient values of the clothoid parameters determined for the measured clothoid points from one image frame to another image frame, based on a speed of the autonomous vehicle (101) moving along the lane and a time difference between successive image frames;
the lane tracking system (103) uses the predicted state transitions to determine a process noise covariance matrix (Q) as input to a second neural network, wherein the second neural network is trained using historical autonomous vehicle (101) speed values and time differences;
The lane tracking system (103) predicts an error covariance (P_p) of the predicted state transition using the determined process noise covariance matrix (Q); and
The lane tracking system (103) determines a Kalman filter parameter for coefficient values of the clothoid parameters determined for the measured clothoid points based on the predicted state transitions (Y_p) and covariance (P_p), the determined measurement noise covariance matrix (R), and coefficient values of the clothoid parameters determined for the measured clothoid points.
11. The method of any preceding claim, further comprising: the lane tracking system (103) adds an initial lateral offset between the lane boundaries and the autonomous vehicle (101) to the reconstructed measured clothoid point.
12. A method for lane tracking of an autonomous vehicle (101), the method comprising:
a lane tracking system (103) receiving measurements corresponding to a plurality of lane boundary detection points from a lane boundary detection system (105) associated with the lane tracking system (103);
The lane tracking system (103) determines coefficient values of a clothoid parameter for a measured clothoid point formed using a measurement set to model a lane boundary of a lane, wherein the measurement set comprises a subset of consecutive lane boundary detection points and corresponding measurement values;
the lane tracking system (103) determines Kalman filter parameters for coefficient values of the clothoid parameters determined for the measured clothoid points to track lane boundaries of the lane, wherein the Kalman filter parameters are determined using at least one neural network;
the lane tracking system (103) uses the corresponding Kalman filter parameters to update coefficient values of the clothoid parameters determined for the measured clothoid points; and
The lane tracking system (103) reconstructs the measured clothoid points using the corresponding updated coefficient values of the clothoid parameters, wherein each reconstructed measured clothoid point enables the lane tracking system (103) to track lane boundaries of the autonomous vehicle (101).
13. The method of claim 12, wherein the at least one neural network comprises a neural network having a time memory.
14. The method of any one of claims 12 to 13, wherein the at least one neural network comprises a recurrent neural network.
15. The method of any one of claims 12 to 14, wherein the at least one neural network comprises a long short-term memory neural network.
16. A lane tracking system (103) for an autonomous vehicle (101), the lane tracking system (103) comprising:
a processor (107); and
A memory (111) communicatively coupled to the processor (107), wherein the memory (111) stores processor (107) instructions that, when executed, cause the processor (107) to train the lane tracking system (103), wherein, for training, the processor (107) is configured to:
Receiving ground truth values corresponding to a plurality of lane boundary detection points and measured values corresponding to the plurality of lane boundary detection points from a lane boundary detection system (105) associated with the lane tracking system (103);
Determining coefficient values of a clothoid parameter for a ground truth clothoid point and a measured clothoid point formed using a ground truth set and a measurement set, respectively, to model a lane boundary of a lane, wherein the ground truth set includes a subset of consecutive lane boundary detection points and corresponding ground truth values, and the measurement set includes a subset of consecutive lane boundary detection points and corresponding measurement values;
determining Kalman filter parameters for coefficient values of the clothoid parameters determined for the measured clothoid points to track lane boundaries of the lane, wherein the Kalman filter parameters are determined using at least one neural network;
Updating coefficient values of the clothoid parameters determined for the measured clothoid points using the corresponding Kalman filter parameters;
Reconstructing the measured clothoid points using the corresponding updated coefficient values of the clothoid parameters, wherein each reconstructed measured clothoid point enables the lane tracking system (103) to track lane boundaries of the autonomous vehicle (101); and
The training error is minimized in each cycle based on the difference between the reconstructed measured clothoid points and the corresponding ground truth set until the training error is below a predefined threshold.
17. The lane tracking system (103) according to claim 16, wherein the plurality of lane boundary detection points correspond to left and right lane boundaries of the lane.
18. The lane tracking system (103) of any one of claims 16 to 17, wherein the processor (107) dynamically selects the ground truth set and the measurement set needed to generate the ground truth clothoid points and the measured clothoid points, respectively.
19. The lane tracking system (103) of any one of claims 16 to 18, wherein the plurality of lane boundary detection points are determined using at least one of lane data received from one or more sensors (113) configured in the vehicle and image frames of the lane received in real-time, wherein the image frames of the lane are received from an image capture device (115) associated with the autonomous vehicle (101).
20. The lane tracking system (103) of claim 19, wherein the processor (107) retrieves at least one of the lane data and the image frames of the lane from a database configured to store the lane data and the image frames captured in real-time.
21. The lane tracking system (103) of any one of claims 16 to 20, wherein the clothoid parameters include an initial curvature (c_0) of a lane boundary, a rate of curvature (c_1) of the lane boundary, and a heading angle (β) relative to the direction of travel of the vehicle.
22. The lane tracking system (103) of any one of claims 16 to 21, wherein to determine the Kalman filter parameters using the at least one neural network, the processor (107) is configured to:
Providing coefficient values of the clothoid parameters determined for the measured clothoid points as input to a first neural network, wherein the first neural network is trained based on historical coefficient values of the clothoid parameters determined for clothoid points formed using a historical measurement set and a ground truth set;
Determining, using the first neural network, a measurement noise covariance matrix (R) from the coefficient values of the clothoid parameters determined for the measured clothoid points;
Predicting a state transition (Y_p) of coefficient values of the clothoid parameters determined for the measured clothoid points from one image frame to another image frame, based on a speed of the autonomous vehicle (101) moving along the lane and a time difference between successive image frames;
Determining a process noise covariance matrix (Q) as an input to a second neural network using the predicted state transitions, wherein the second neural network is trained using historical autonomous vehicle (101) speed values and time difference values;
Predicting an error covariance (P_p) of the predicted state transition using the determined process noise covariance matrix (Q); and
A Kalman filter parameter is determined for coefficient values of the clothoid parameters determined for the measured clothoid points based on the predicted state transitions (Y_p) and covariance (P_p), the determined measurement noise covariance matrix (R), and coefficient values of the clothoid parameters determined for the measured clothoid points.
23. The lane tracking system (103) of any one of claims 16 to 22, wherein the processor (107) is further configured to add an initial lateral offset between the lane boundaries and the autonomous vehicle (101) to the reconstructed measured clothoid point.
24. The lane tracking system (103) of any one of claims 16 to 23, wherein the at least one neural network comprises a neural network with time memory.
25. The lane tracking system (103) of any one of claims 16 to 24, wherein the at least one neural network comprises a recurrent neural network.
26. The lane tracking system (103) of any one of claims 16 to 25, wherein the at least one neural network comprises a long short-term memory neural network.
27. A lane tracking system (103) for an autonomous vehicle (101), the lane tracking system (103) comprising:
a processor (107); and
A memory (111) communicatively coupled to the processor (107), wherein the memory (111) stores processor (107) instructions that, when executed, cause the processor (107) to:
Receiving measurements corresponding to a plurality of lane boundary detection points from a lane boundary detection system (105) associated with the lane tracking system (103);
Determining coefficient values of a clothoid parameter for a measured clothoid point formed using a measurement set to model a lane boundary of a lane, wherein the measurement set includes a subset of consecutive lane boundary detection points and corresponding measurement values;
determining Kalman filter parameters for coefficient values of the clothoid parameters determined for the measured clothoid points to track lane boundaries of the lane, wherein the Kalman filter parameters are determined using at least one neural network;
updating coefficient values of the clothoid parameters determined for the measured clothoid points using the corresponding Kalman filter parameters; and
The measured clothoid points are reconstructed using the corresponding updated coefficient values of the clothoid parameters, wherein each reconstructed measured clothoid point enables the lane tracking system (103) to track lane boundaries of the autonomous vehicle (101).
28. The lane tracking system (103) according to claim 27, wherein the at least one neural network comprises a neural network with time memory.
29. The lane tracking system (103) of any one of claims 27 to 28, wherein the at least one neural network comprises a recurrent neural network.
30. The lane tracking system (103) of any one of claims 27 to 29, wherein the at least one neural network comprises a long short-term memory neural network.
CN202280064485.3A 2021-09-27 2022-09-27 Method and system for lane tracking of autonomous vehicles Pending CN117980949A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
IN202141043726 2021-09-27
IN202141043726 2021-09-27
GB2117061.8 2021-11-26
PCT/EP2022/076759 WO2023046975A1 (en) 2021-09-27 2022-09-27 Method and system for lane tracking for an autonomous vehicle

Publications (1)

Publication Number Publication Date
CN117980949A true CN117980949A (en) 2024-05-03

Family

ID=79270344

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202280064485.3A Pending CN117980949A (en) 2021-09-27 2022-09-27 Method and system for lane tracking of autonomous vehicles

Country Status (2)

Country Link
CN (1) CN117980949A (en)
GB (1) GB2611117A (en)


Also Published As

Publication number Publication date
GB2611117A (en) 2023-03-29
GB202117061D0 (en) 2022-01-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination