WO2021217420A1 - Lane line tracking method and device - Google Patents

Lane line tracking method and device

Info

Publication number
WO2021217420A1
WO2021217420A1 (application PCT/CN2020/087506 / CN2020087506W)
Authority
WO
WIPO (PCT)
Prior art keywords
information
predicted value
lane line
vehicle
coordinate system
Prior art date
Application number
PCT/CN2020/087506
Other languages
English (en)
French (fr)
Inventor
袁维平
吴祖光
周鹏
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to CN202080006573.9A (CN113168708B)
Priority to EP20933833.4A (EP4141736A4)
Priority to PCT/CN2020/087506 (WO2021217420A1)
Publication of WO2021217420A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/251 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00 - Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/10 - Path keeping
    • B60W30/12 - Lane keeping
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08 - Interaction between the driver and the control system
    • B60W50/14 - Means for informing the driver, warning the driver or prompting a driver intervention
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00 - Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001 - Planning or execution of driving tasks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 - Complex mathematical operations
    • G06F17/11 - Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 - Complex mathematical operations
    • G06F17/16 - Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/80 - Geometric correction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/12 - Edge-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/277 - Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/60 - Analysis of geometric attributes
    • G06T7/64 - Analysis of geometric attributes of convexity or concavity
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/0464 - Convolutional networks [CNN, ConvNet]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30248 - Vehicle exterior or interior
    • G06T2207/30252 - Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256 - Lane; Road marking

Definitions

  • This application relates to the field of artificial intelligence, and in particular to a lane line tracking method and device.
  • Lane line detection and lane line tracking have received widespread attention in vehicle driving, especially automatic driving, and with the popularization of automatic driving, the requirements for the stability and accuracy of lane line tracking are becoming increasingly demanding.
  • In existing lane line tracking schemes, it is usually necessary to assume that the road surface is level and that the lane lines are parallel.
  • In reality, however, uneven road surfaces are very common, and lane lines are often non-parallel, for example where an extra left-turn or right-turn lane is added at a traffic light intersection.
  • Even on the same planned urban road, the number of lanes may be increased or decreased according to actual conditions, which makes some lane lines non-parallel. Tracking that still relies on the premise that the road is level and the lane lines are parallel will therefore produce large errors. In short, existing lane line tracking methods have low accuracy and lack universality.
  • In view of this, the present application provides a lane line tracking method and device that, on the one hand, can effectively improve the accuracy of lane line tracking and, on the other hand, have good universality.
  • In a first aspect, a lane line tracking method is provided, including: obtaining a first predicted value, which represents a lane line model in the vehicle body coordinate system and is obtained by prediction using the vehicle's motion information at the previous moment; obtaining first detection information, which includes the pixel points of the lane line at the current moment in the image coordinate system; determining a first mapping relationship according to the first predicted value and the first detection information, where the first mapping relationship indicates the real-time mapping relationship between the image coordinate system and the vehicle body coordinate system; and determining a second predicted value according to the first mapping relationship, where the second predicted value indicates a corrected value of the first predicted value.
  • the predicted value and the detection information are used to obtain the real-time mapping relationship between the two coordinate systems.
  • This can eliminate the impact of road changes and improve the accuracy of lane line tracking.
  • The reason is that these road changes alter the mapping relationship between the two coordinate systems, so acquiring the mapping relationship in real time is equivalent to capturing those changes in real time.
  • the effect of uneven road surface on the tracking results can be effectively eliminated, and a more accurate lane line prediction value can be obtained.
  • the detection information of the lane line may include pixel point information of the lane line, and may also include the line type of the lane line.
  • The current moment is determined by the time corresponding to an image frame; for example, the time corresponding to a certain frame is taken as the current moment. The previous moment refers to a moment, among the image frames, that is earlier than the current moment, and may be the immediately preceding moment; the later moment refers to a moment that is later than the current moment, and may be the immediately following moment.
  • the detection information can be obtained in real time, or the aforementioned detection information can be obtained from a storage device.
  • the self-vehicle information of the vehicle can be understood as information used to describe the state and movement of the vehicle
  • the self-vehicle information can include the self-vehicle motion information of the vehicle
  • The self-vehicle motion information can be understood as dynamic information, and may include any one or more kinds of dynamic information such as the vehicle's speed or angular velocity (for example, the yaw angular velocity).
  • The first predicted value is obtained using the vehicle motion information at the previous moment, which is equivalent to using a lane line prediction method to predict, from the previous moment, the lane line in the vehicle body coordinate system.
  • The first predicted value and the first detection information are considered together to determine the second predicted value; that is, the lane line predicted value and the lane line detection information at the current moment are used to obtain an accurate predicted value for the current moment (which can also be understood as obtaining a more accurate lane line model).
  • Similarly, at the next moment, the lane line detection information in the image coordinate system and the lane line predicted value at the next moment are used to determine a more accurate predicted value for the next moment (again, a more accurate lane line model).
  • The first predicted value may be read from, for example, a storage device, or may be calculated from the self-vehicle motion information, and it may be generated at the previous moment or at the current moment. In other words, the first predicted value may be computed before the current moment, as soon as the vehicle motion information of the previous moment is available; alternatively, only the vehicle motion information may be stored at the previous moment, with the calculation deferred until the current moment; other variants are possible and are not listed here.
  • the lane line model can be understood as a mathematical model of the lane line, or can be understood as a way to represent the lane line.
  • For example, a curve equation or a polynomial can be used to represent the lane line, so the model can also be called a lane line equation, lane line curve equation, lane line polynomial, and so on.
  • the lane line model may be represented by parameters such as the position (intercept), angle, curvature, rate of change of curvature, and radius of curvature of the lane line.
  • the lane line model can also be represented by parameters such as lane width, the position of the center of the vehicle deviating from the lane center, the angle of the lane line, and the curvature. It should be understood that there may also be other ways of expressing the lane line, as long as the relative position and change trend of the lane line can be expressed, and there is no limitation.
  • The lane line prediction method (or lane line prediction algorithm) can be used to obtain the predicted value of the lane line in the vehicle body coordinate system at the current moment, based on the time interval between the current moment and the previous moment and the vehicle information at the previous moment; this predicted value is the first predicted value.
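  • To make the prediction step concrete, the following sketch (our own illustration, not the patent's IMM-based predictor; the cubic model and the small-angle approximation are assumptions) re-centers a cubic lane line model y(x) = a·x³ + b·x² + c·x + d on the vehicle's new origin using the previous moment's speed and yaw rate:

```python
import numpy as np

def predict_lane_model(params, speed, yaw_rate, dt):
    """Shift a cubic lane line model y(x) = a*x^3 + b*x^2 + c*x + d
    into the vehicle body coordinate system at the current moment.

    Illustrative sketch only: assumes straight longitudinal motion
    over dt and a small heading change.
    """
    a, b, c, d = params
    dx = speed * dt        # longitudinal displacement since the last frame
    dpsi = yaw_rate * dt   # heading change since the last frame (radians)

    # Taylor re-expansion of the cubic about the new origin x0 = dx.
    d_new = a * dx**3 + b * dx**2 + c * dx + d
    c_new = 3 * a * dx**2 + 2 * b * dx + c
    b_new = 3 * a * dx + b

    # Small-angle rotation by the heading change: in the new body frame
    # the lane appears rotated by -dpsi, which to first order subtracts
    # dpsi from the slope term.
    c_new -= dpsi
    return np.array([a, b_new, c_new, d_new])
```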
  • The lane line prediction method may use, for example, an interacting multiple model (IMM) algorithm, also called an IMM prediction method, to obtain the above-mentioned predicted value of the lane line.
  • the lane line prediction algorithm can also be regarded as using the model of the lane line prediction algorithm to process some input data to obtain the predicted value of the lane line model.
  • The model of the IMM algorithm can be called the IMM prediction model, the IMM algorithm model, and so on. It can be seen that, in the embodiments of the present application, the model of the lane line prediction algorithm and the lane line model are different concepts.
  • The model of the lane line prediction algorithm is the model of the prediction algorithm used to obtain the lane line model, whereas the lane line model refers to the curve equation (mathematical expression) of the lane line in the vehicle body coordinate system; the predicted value of the lane line model is therefore the value produced by the model of the lane line prediction algorithm.
  • the first predicted value may be obtained by using a model of a lane line prediction algorithm.
  • the second predicted value may be used to update the model of the lane line prediction algorithm (for example, the IMM algorithm) at the current moment.
  • Using the more accurate predicted value of the lane line model (the second predicted value) to update the model of the lane line prediction algorithm in the vehicle body coordinate system is equivalent to updating the model parameters of the prediction algorithm in real time, which accelerates the convergence of the prediction algorithm (for example, the IMM prediction algorithm) and improves the accuracy of the lane line model it predicts.
  • the EKF filter can be used to filter the lane line model in the vehicle body coordinate system to improve the accuracy of the lane line model.
  • the model of the lane line prediction algorithm can also be updated according to the second predicted value.
  • a more accurate predicted value of the lane line model can be obtained.
  • In some implementations, the first predicted value is obtained only when the time interval is within a preset range; that is, whether to perform lane line prediction is first decided according to the time interval between the previous moment and the current moment, and prediction is performed only when that interval falls within the preset range.
  • the state value of the filter can also be understood as the coefficient of the filter, or the state and parameters of the filter.
  • the time stamp can be understood as the time corresponding to the image frame.
  • Of the two frames of images, the time with the later time stamp can be taken as the current moment and the time with the earlier time stamp as the previous moment; alternatively, the time with the later time stamp can be regarded as the later moment and the earlier one as the current moment.
  • the two frames of images may be continuous or not continuous. When the two frames of images are continuous, it can be considered that the two frames of images are the images of the previous moment and the current moment respectively, and it can also be considered that the two frames of images are the images of the current moment and the next moment respectively.
  • a homography matrix can be used to represent the mapping relationship between the vehicle body coordinate system and the image coordinate system.
  • The field of view can be divided into regions, and a homography matrix can be calculated for each region. For example, the area in front of the vehicle can be divided into multiple regions according to distance, yielding multiple homography matrices corresponding to the multiple regions, with each region corresponding to at least one homography matrix.
  • This can be achieved by obtaining at least one real-time homography matrix, that is, calculating the homography matrices of the different regions in real time and using them to express the real-time mapping relationship (for example, the first mapping relationship described above).
  • Dividing into multiple regions can more fully reflect the actual conditions of the road surface, thereby further improving the accuracy of the mapping relationship, thereby improving the accuracy of lane line tracking.
  • Setting a homography matrix per region yields a more accurate mapping relationship between the two coordinate systems, thereby improving the accuracy of lane line tracking; a sketch of such region-wise mapping follows.
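  • In the illustration below (the band edges and function names are our own assumptions, not the patent's), lane points are bucketed by forward distance and projected with the homography assigned to each band:

```python
import numpy as np

def map_points_regionwise(points_xy, region_edges, homographies):
    """Map vehicle-body lane points to image pixels using one
    homography per forward-distance region (illustrative sketch).

    points_xy:    (N, 2) lane points in the vehicle body plane
    region_edges: ascending band edges, e.g. [0, 20, 40, 80] metres
    homographies: list of 3x3 matrices, one per band
    """
    out = np.empty((len(points_xy), 2), dtype=float)
    for i, (x, y) in enumerate(points_xy):
        # Pick the homography of the distance band this point falls in.
        r = int(np.searchsorted(region_edges, x, side="right")) - 1
        r = min(max(r, 0), len(homographies) - 1)
        u, v, w = homographies[r] @ np.array([x, y, 1.0])
        out[i] = (u / w, v / w)  # dehomogenise to pixel coordinates
    return out
```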
  • the initial homography matrix can be obtained first, and then according to the first prediction value and the initial homography matrix, the fourth prediction value corresponding to the first prediction value can be obtained.
  • The fourth predicted value can be understood as the corresponding value of the first predicted value under the initial mapping relationship, that is, the corresponding value of the first predicted value in the initial image coordinate system. Since the initial homography matrix can be regarded as representing the mapping relationship between the vehicle body coordinate system plane and the initial image plane (initial image coordinate system), the fourth predicted value is equivalent to the corresponding value of the first predicted value in the initial image plane. It can also be seen as transferring the lane lines in the vehicle body coordinate system onto the initial image plane.
  • The first detection information and the fourth predicted value are then used to determine the homography matrix at the current moment. That is, when the road gradient changes, the initial mapping relationship between the vehicle body coordinate system and the image coordinate system no longer holds, and a deviation appears between the fourth predicted value and the first detection information in the image coordinate system at the current moment. A more accurate homography matrix at the current moment (that is, a real-time homography matrix) can then be obtained by, for example, minimizing the difference between the two, so as to obtain the mapping relationship at the current moment (the real-time mapping relationship).
  • the real-time mapping relationship can be understood as a mapping relationship obtained in real time, that is, it can be understood as a mapping relationship that is continuously obtained as time advances, or can be understood as a mapping relationship corresponding to different moments.
  • the homography matrix at the current moment may be used to transfer the first predicted value from the vehicle body coordinate system to the image coordinate system.
  • When multiple regions are used, the same method described above can be applied to obtain the homography matrix of each region, and the multiple homography matrices can then be used to transfer the predicted value of the lane line to the image coordinate system region by region.
  • Determining the initial homography matrix is equivalent to determining the initial mapping relationship between the vehicle body coordinate system and the image coordinate system, and determining the homography matrix at the current moment is equivalent to determining the real-time mapping relationship between them.
  • The difference between the corresponding value of the predicted value of the lane line model in the initial image coordinate system and its corresponding value in the current image coordinate system is used to construct a loss function, which is minimized iteratively to obtain the mapping relationship between the vehicle body coordinate system and the image coordinate system at the current moment, that is, the homography matrix at the current moment.
  • In other words, the loss function is constructed from the difference between the predicted value's corresponding value under the initial mapping relationship and its corresponding value under the current mapping relationship, and the current mapping relationship is obtained iteratively, for example as sketched below.
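  • One plausible realization of this minimization (the patent does not prescribe a particular solver; scipy's least_squares is used here purely for illustration, and point correspondences between prediction and detection are assumed to be established already) parameterizes the 8 free entries of the homography and iteratively reduces the pixel-space residual:

```python
import numpy as np
from scipy.optimize import least_squares

def refine_homography(H0, body_pts, detected_px):
    """Refine an initial homography so that predicted lane points in the
    vehicle body plane project onto the detected lane pixels
    (illustrative sketch of the loss minimization described above).

    H0:          3x3 initial homography (e.g. from offline calibration)
    body_pts:    (N, 2) predicted lane points in the body plane
    detected_px: (N, 2) corresponding detected pixels, current image
    """
    H0 = H0 / H0[2, 2]  # normalise so the fixed entry H[2, 2] equals 1

    def residual(h8):
        H = np.append(h8, 1.0).reshape(3, 3)  # 8 degrees of freedom
        P = np.column_stack([body_pts, np.ones(len(body_pts))]) @ H.T
        proj = P[:, :2] / P[:, 2:3]           # dehomogenise
        return (proj - detected_px).ravel()   # pixel-space loss terms

    sol = least_squares(residual, H0.ravel()[:8])
    return np.append(sol.x, 1.0).reshape(3, 3)
```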
  • The homography matrix (mapping relationship) at any other moment can be obtained in the same way as at the current moment, which is equivalent to treating the "other moment" as the "current moment", or replacing "current moment" with "other moment" in the above steps.
  • The first mapping relationship and the first predicted value are used to obtain a third predicted value, where the third predicted value represents the corresponding value of the first predicted value under the first mapping relationship.
  • The first mapping relationship (for example, a real-time homography matrix) can be used to transfer the first predicted value into the image coordinate system at the current moment, obtaining the third predicted value; that is, the third predicted value is the corresponding value of the first predicted value in the image coordinate system at the current moment, determined from the first predicted value and the first mapping relationship.
  • the third predicted value may be adjusted according to the first detection information, so as to obtain the second predicted value.
  • Specifically, the Mahalanobis distance is computed between the lane line corresponding to the predicted value transferred into the image coordinate system at the current moment (the third predicted value) and the pixel points of each lane line in the original detection information in the image coordinate system (the first detection information); the predicted value of the lane line with the smallest Mahalanobis distance is taken as the second predicted value of the lane line model, that is, the corrected predicted value.
  • This is equivalent to associating at least one lane line in the vehicle body coordinate system with each lane line in the image coordinate system, so that the information of the line with the smallest Mahalanobis distance is used as the current measurement.
  • For example, two lane lines may be obtained in the vehicle body coordinate system, namely the left and right lane lines, while three lane lines are present in the image coordinate system; the above method can then determine which two of the three correspond to the left and right lane lines respectively. A minimal sketch of this association is given below.
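  • In this sketch, the pixel-space covariance used for the Mahalanobis distance would in practice come from the tracker (here it is an assumed input), and predicted and detected lines are assumed to be sampled at comparable points:

```python
import numpy as np

def associate_lane_lines(predicted_lines, detected_lines, cov):
    """Match each predicted lane line (projected into the image) to the
    detected line with the smallest mean Mahalanobis distance (sketch).

    predicted_lines: list of (N, 2) pixel arrays, one per predicted line
    detected_lines:  list of (M, 2) pixel arrays, one per detected line
    cov:             2x2 measurement covariance in pixel space (assumed)
    """
    cov_inv = np.linalg.inv(cov)
    matches = []
    for pred in predicted_lines:
        best, best_d = None, np.inf
        for j, det in enumerate(detected_lines):
            n = min(len(pred), len(det))  # compare at shared sample count
            diff = pred[:n] - det[:n]
            d = np.mean(np.sqrt(np.einsum("ni,ij,nj->n",
                                          diff, cov_inv, diff)))
            if d < best_d:
                best, best_d = j, d
        matches.append((best, best_d))
    return matches
```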
  • the lane line model (lane line equation) in the vehicle body coordinate system can also be updated according to the second predicted value. Updating the lane line model may include updating the slope, intercept and other parameters of the lane line model (lane line curve equation).
  • In some implementations, using the second predicted value for path planning includes: obtaining path planning information, and generating a path plan for the next time period or the next moment based on the second predicted value and the path planning information.
  • The path planning information may include at least one of the following: road information, traffic information, or self-vehicle information. The road information may include at least one of the following: roadblock information, road width information, or road length information. The traffic information may include at least one of the following: traffic light information, traffic regulation information, or the driving information or road condition information of other surrounding vehicles. The self-vehicle information may include at least one of the following: self-vehicle motion information, position information, shape information, and structure information; the self-vehicle motion information may include the angular velocity, speed, and so on of the vehicle.
  • Position information can be understood as the current position of the vehicle.
  • Shape information can be understood as the shape, size, and so on of the vehicle.
  • Structural information can be understood as describing the components of the vehicle, such as the front and the body.
  • the drivable area information at the current moment can also be obtained, so as to determine the lane-level route planning scheme for the next time period or the next time according to the second predicted value, the route planning information, and the drivable area information.
  • the lane line tracking method provided in the first aspect can be used to obtain a more accurate prediction value of the lane line model, thereby improving the accuracy of the planned path.
  • the second predicted value is used for early warning strategy planning, including: obtaining early warning information, and generating early warning signals based on the second predicted value, road information, and preset early warning thresholds ; Generate early warning strategy planning information according to the early warning signal, the early warning strategy planning information is used to indicate a response strategy to the early warning signal; the early warning information may include at least one of the following: vehicle location information, traffic information, and roadblock information.
  • the lane line tracking method provided in the first aspect can be used to obtain a more accurate prediction value of the lane line model, thereby improving the accuracy of the early warning.
  • In a second aspect, a lane line tracking device is provided, which includes units for executing the method in any one of the implementations of the foregoing first aspect.
  • In a third aspect, a chip is provided, which includes a processor and a data interface; the processor reads instructions stored in a memory through the data interface and executes the method in any one of the implementations of the first aspect.
  • the chip may further include a memory in which instructions are stored, and the processor is configured to execute instructions stored on the memory.
  • the processor is configured to execute the method in any one of the implementation manners of the first aspect.
  • In another aspect, a computer-readable medium is provided, which stores program code for execution by a device, where the program code includes instructions for executing the method in any one of the implementations of the first aspect.
  • In another aspect, a computer program product containing instructions is provided; when the computer program product runs on a computer, the computer is caused to execute the method in any one of the implementations of the first aspect.
  • Figure 1 is a schematic diagram of a method for establishing a vehicle body coordinate system.
  • Fig. 2 is a functional block diagram of a vehicle to which an embodiment of the present application is applied.
  • Fig. 3 is a schematic diagram of an automatic driving system according to an embodiment of the present application.
  • Fig. 4 is a schematic diagram of the application of a cloud-side command automatic driving vehicle according to an embodiment of the present application.
  • Fig. 5 is a schematic diagram of a lane line detection and tracking device according to an embodiment of the present application.
  • Fig. 6 is a schematic flowchart of a lane line detection method according to an embodiment of the present application.
  • Fig. 7 is a schematic flowchart of a lane line tracking method according to an embodiment of the present application.
  • FIG. 8 is a schematic flowchart of using IMM to predict lane lines according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a calculation process of a homography matrix according to an embodiment of the present application.
  • Fig. 10 is a schematic flowchart of a method for updating a lane line model in a vehicle body coordinate system according to an embodiment of the present application.
  • Lane line detection refers to determining the relative position of the lane line in an image, which can also be understood as obtaining the coordinates (pixel points) of the lane line in the image. For example, after an image is processed by a detection algorithm, the position of the lane line in the image is obtained, also described as obtaining the pixel points of the lane line.
  • The lane lines can be detected in the image plane through neural networks or traditional algorithms. The embodiments of the present application focus, as an example, on using a neural network to obtain the pixels of the lane line in the image plane.
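  • For instance, if the neural network outputs a per-pixel lane probability mask, the detection information could be extracted roughly as follows (the mask format and threshold are assumptions, not specified by the patent):

```python
import numpy as np

def lane_pixels_from_mask(mask, threshold=0.5):
    """Return (u, v) pixel coordinates of lane line points from a
    neural-network probability mask (illustrative sketch)."""
    vs, us = np.nonzero(mask > threshold)  # row and column indices
    return np.column_stack([us, vs])       # (N, 2) image coordinates
```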
  • Lane line tracking is a method of processing the obtained lane lines, which can reduce the impact of missed detections and false detections.
  • Lane line detection is performed on each frame of image, and as the vehicle drives forward it continuously generates detection results. In practice, one or more frames may suffer missed detections or false detections (such as detecting other lines as lane lines), so that lane lines detected in successive frames fail to correspond or correspond incorrectly; lane line tracking can be used to correct these errors.
  • The principle of lane line tracking is to abstract the lane line into a mathematical (geometric) model (such as the lane line model described below), obtain a predicted value of that model from real historical data (such as the vehicle motion information at the previous moment), and then match the prediction against the lane lines detected in the image at the current moment through certain matching rules (such as the mapping relationship between the image coordinate system and the vehicle body coordinate system); lines that satisfy the matching rules are considered to be the same lane line.
  • lane line tracking can be seen as a process of continuously obtaining lane line models, and the obtained lane line models can be used to correct errors in lane line detection.
  • Figure 1 is a schematic diagram of establishing a vehicle body coordinate system.
  • A planar rectangular coordinate system is established on the vehicle body, where the X-axis points to the front of the car and the Y-axis points from the front passenger seat toward the driver's seat.
  • The directions of the X-axis and the Y-axis can also be opposite to those shown in the figure; for example, the Y-axis may point from the driver's seat toward the front passenger seat.
  • the two curves in the vehicle body coordinate system are lane lines, and each lane line can be described by a cubic equation.
  • The curve equation can be: y = a·x³ + b·x² + c·x + d, where a represents the rate of change of curvature of the lane line model, b represents the curvature, c represents the slope, and d represents the intercept.
  • the curve equation of the lane line can also be called the lane line equation, which can be regarded as an example of the lane line model described in the embodiment of the present application. That is to say, in this lane line tracking method, lane line tracking is achieved by obtaining the lane line equation in the vehicle body coordinate system.
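  • As a concrete illustration (the coefficient values below are made up, not taken from the patent), such a model gives the lateral offset of the lane line at any forward distance x in the vehicle body coordinate system:

```python
import numpy as np

def lane_line_y(x, a, b, c, d):
    """Lateral offset y of the lane line at forward distance x, using
    the cubic lane line model y = a*x^3 + b*x^2 + c*x + d."""
    return a * x**3 + b * x**2 + c * x + d

# Sample a lane line 0-50 m ahead of the vehicle.
x = np.linspace(0.0, 50.0, 26)
y = lane_line_y(x, a=1e-6, b=2e-4, c=0.01, d=1.8)
```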
  • the above two ways of outputting the results of lane line tracking can be regarded as a process of real-time and continuous correspondence between the detected lane line and the actual lane line. Or it can be understood that lane line tracking is a process of continuously obtaining a lane line model.
  • the lane line tracking method and device output the lane line model in the vehicle body coordinate system.
  • the image coordinate system is used to indicate the relative position of the lane line in the image, that is, the coordinate of the lane line in the image coordinate system is used to indicate the relative position of the lane line in the image. Therefore, the image coordinate system can be understood as a coordinate system established by using the plane where the image is located to indicate the relative position of the lane lines.
  • the vehicle body coordinate system is used to indicate the position of the lane line relative to the vehicle, that is, the coordinates of the lane line in the vehicle body coordinate system are used to indicate the position of the lane line relative to the vehicle. Therefore, the vehicle body coordinate system can be understood as a coordinate system established by the vehicle body to indicate the relative position of the lane line.
  • the vehicle body coordinate system may be established by the method described above.
  • The homography matrix is a mathematical concept from projective geometry. It expresses the perspective transformation between a plane in the real world and its corresponding image, and transforms an image from one view to another through perspective transformation. It can therefore be understood that the homography matrix can be used to realize conversion between different planes.
  • the homography of a plane is defined as a projection mapping from one plane to another. Therefore, the mapping of points on a two-dimensional plane to the camera imager is an example of planar homography. If homogeneous coordinates are used to map a point on the calibration plate to a point on the imager, this mapping can be represented by a homography matrix.
  • The homography matrix has 8 degrees of freedom; that is, to obtain a unique solution, 4 point pairs (corresponding to 8 equations) are needed to solve for the homography matrix.
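  • The 4-point solve can be written as a small linear system. The following direct linear transform (DLT) sketch in plain numpy shows one standard way to do it; degenerate configurations (e.g. three collinear points) are not handled:

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve the 8-DoF homography H mapping source points to
    destination points (DLT sketch).

    src, dst: (K, 2) arrays of corresponding points, K >= 4
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear equations in h.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # h spans the null space of A: take the right singular vector
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)
```

  • With exactly 4 point pairs the system is exactly determined (up to scale); with more pairs the same code returns a least-squares solution.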
  • In the embodiments of the present application, the homography matrix is used to represent the mapping matrix between the vehicle body coordinate system and the image coordinate system, which can also be understood as the mapping matrix between the planes determined by the two coordinate systems.
  • That is, the homography matrix represents the mapping relationship between the vehicle body coordinate system and the image coordinate system, and by acquiring (updating) the homography matrix in real time, changes in that mapping relationship are obtained (captured) in real time.
  • the target has multiple motion states, and each motion state corresponds to a model.
  • The motion state of the target at any moment can be represented by one of the given models, and the filtering result for the target is a synthesis of the results of multiple filtering models, for example a weighted synthesis. Since the target's maneuvers can be assumed to follow different motion models at different stages, the IMM algorithm can be used to solve target tracking problems.
  • the IMM algorithm is used to track and predict the lane line, that is, in the embodiment of the present application, the lane line is equivalent to the above-mentioned "target".
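  • The following deliberately simplified scalar IMM step is our own sketch to make the mixing and weighting concrete; the patent's lane line predictor would instead carry full lane-model state vectors. Each model here is a 1-D Kalman filter with identity dynamics, differing only in process noise:

```python
import numpy as np

def imm_step(states, covs, mu, trans, z, r_meas, q_list):
    """One cycle of a simplified scalar IMM estimator.

    states, covs: per-model estimates and variances, shape (n,)
    mu:           per-model probabilities, shape (n,)
    trans:        transition matrix, trans[i, j] = P(model j | model i)
    z, r_meas:    scalar measurement and its variance
    q_list:       per-model process-noise variances, shape (n,)
    """
    states = np.asarray(states, dtype=float).copy()
    covs = np.asarray(covs, dtype=float).copy()
    n = len(states)

    # 1) Interaction/mixing of the model-conditioned estimates.
    c = mu @ trans                               # predicted model probs
    mix = (trans * mu[:, None]) / c[None, :]     # mixing probabilities
    x_mix = mix.T @ states
    p_mix = np.array([(mix[:, j] * (covs + (states - x_mix[j]) ** 2)).sum()
                      for j in range(n)])

    # 2) Model-conditioned Kalman predict + update, with likelihoods.
    like = np.empty(n)
    for j in range(n):
        p_pred = p_mix[j] + q_list[j]            # predict (F = 1)
        s = p_pred + r_meas                      # innovation variance
        k = p_pred / s                           # Kalman gain
        innov = z - x_mix[j]
        states[j] = x_mix[j] + k * innov
        covs[j] = (1.0 - k) * p_pred
        like[j] = np.exp(-0.5 * innov**2 / s) / np.sqrt(2 * np.pi * s)

    # 3) Model-probability update and combined (weighted) output.
    mu = like * c
    mu = mu / mu.sum()
    return states, covs, mu, mu @ states
```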
  • A Kalman filter (KF) is an algorithm that uses the state equations of a linear system to optimally estimate the system state from input and output observation data. Since the observation data include the influence of noise and interference in the system, the optimal estimation can also be regarded as a filtering process.
  • The extended Kalman filter (EKF) is a filtering method for nonlinear systems derived from the KF: it linearizes the nonlinear system and then performs Kalman filtering. In the embodiments of the present application, the EKF is used to filter the lane lines in the vehicle body coordinate system to obtain the lane line model.
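  • As an illustration of "linearize, then Kalman-filter" (a generic sketch with our own names; a production system would typically use an analytic Jacobian rather than finite differences):

```python
import numpy as np

def ekf_update(x, P, z, R, h, eps=1e-6):
    """Generic EKF measurement update with a numerically
    linearized measurement function (illustrative sketch).

    x, P: state mean (n,) and covariance (n, n)
    z, R: measurement (m,) and covariance (m, m)
    h:    nonlinear measurement function mapping (n,) -> (m,)
    """
    hx = np.asarray(h(x), dtype=float)
    # Finite-difference Jacobian of h at x: column i is dh/dx_i.
    H = np.column_stack([(np.asarray(h(x + eps * e)) - hx) / eps
                         for e in np.eye(len(x))])
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ (np.asarray(z) - hx)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```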
  • the lane line tracking method and/or device provided by the embodiments of the present application can be applied to vehicles and other vehicles.
  • These methods and/or devices can be applied to manual driving, assisted driving, and automatic driving.
  • Application scenario 1: Path planning/driving planning
  • the lane line tracking method provided in the embodiments of the present application can be applied to scenarios such as path planning/driving planning.
  • the lane line tracking results obtained by the lane line tracking method can be sent to the path planning module or device, so that the path planning module or device can The route is planned according to the lane line tracking result (for example, the second predicted value of the lane line described below or the lane line model in the vehicle body coordinate system described below, etc.), or an existing path planning scheme is adjusted.
  • The lane line tracking results obtained by the lane line tracking method can be sent to the driving planning module or device, so that it can determine the vehicle's drivable area according to the tracking results, or use the tracking results to guide the vehicle's next behavior, for example generating an execution action in automatic driving or guiding the driver's driving.
  • the drivable area can also be called the passable space, which is a way of describing the surrounding environment of the vehicle.
  • the passable space of a vehicle generally contains information about other vehicles, pedestrians, and roadsides. Therefore, the passable space of the vehicle is mainly used to clearly describe the free space near the vehicle.
  • the lane line tracking method and/or device provided in this application embodiment can also be applied to a navigation system, and the vehicle obtains the lane line tracking result (for example, the second predicted value of the lane line described below or the vehicle body coordinate system described below) After the lane line model, etc.), the result can be reported to the navigation control module or device.
  • The navigation control module or device can, based on the received lane line tracking results and other information such as road conditions, give the driver driving instructions and/or instruct the self-driving vehicle to generate corresponding execution actions. The navigation control module or device can be a cloud device, server, terminal device, or other data processing equipment, and can also be an on-board module or device installed on the vehicle.
  • The lane line tracking result and traffic information can be combined to guide the vehicle to drive according to traffic rules. In particular, when applied to automatic driving, the autonomous vehicle can be controlled to drive in the correct lane according to the lane line tracking result and to pass through traffic-light intersections at the correct time.
  • Lane-level navigation information can also be planned for the vehicle in combination with the lane line tracking results, road condition information, route information, and so on.
  • The lane line tracking method and/or device provided in the embodiments of the present application can also be used for early warning strategy planning. For example, lane departure warning can be performed based on the lane line tracking result and the current position of the vehicle: when it is determined that the vehicle has deviated from the lane, a warning signal is issued. As a further example, when the vehicle approaches or crosses a solid lane line, a warning is given by triggering an indicator light or playing a sound. The alarm signal can also be sent to other decision-making or control modules to further control the driving of the vehicle.
  • Collision warning can also be carried out based on the lane line tracking results and the vehicle's surroundings; for example, collision warning strategy planning can be performed based on the tracking results together with other vehicles, pedestrians, obstacles, roadsides, buildings, and isolation zones.
  • Fig. 2 is a functional block diagram of a vehicle to which an embodiment of the present application is applied.
  • the vehicle 100 may be a manually driven vehicle, or the vehicle 100 may be configured in a fully or partially automatic driving mode.
  • While in the automatic driving mode, the vehicle 100 can control itself: it can determine the current state of the vehicle and of its surrounding environment, determine the possible behavior of at least one other vehicle in the surrounding environment, determine a confidence level corresponding to the possibility of that other vehicle performing the behavior, and control the vehicle 100 based on the determined information.
  • the vehicle 100 can be placed to operate without human interaction.
  • the vehicle 100 may include various subsystems, such as a traveling system 110, a sensing system 120, a control system 130, one or more peripheral devices 140 and a power supply 160, a computer system 150, and a user interface 170.
  • the vehicle 100 may include more or fewer subsystems, and each subsystem may include multiple elements.
  • each of the subsystems and elements of the vehicle 100 may be wired or wirelessly interconnected.
  • the travel system 110 may include components for providing power movement to the vehicle 100.
  • the travel system 110 may include an engine 111, a transmission 112, an energy source 113, and wheels 114/tires.
  • the engine 111 may be an internal combustion engine, an electric motor, an air compression engine, or other types of engine combinations; for example, a hybrid engine composed of a gasoline engine and an electric motor, or a hybrid engine composed of an internal combustion engine and an air compression engine.
  • the engine 111 can convert the energy source 113 into mechanical energy.
  • the energy source 113 may include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other power sources.
  • the energy source 113 may also provide energy for other systems of the vehicle 100.
  • the transmission device 112 may include a gearbox, a differential, and a drive shaft; wherein, the transmission device 112 may transmit mechanical power from the engine 111 to the wheels 114.
  • the transmission device 112 may also include other devices, such as a clutch.
  • the drive shaft may include one or more shafts that can be coupled to one or more wheels 114.
  • the sensing system 120 may include several sensors that sense information about the environment around the vehicle 100.
  • The sensing system 120 may include a positioning system 121 (for example, a global positioning system (GPS), the Beidou system, or another positioning system), an inertial measurement unit (IMU) 122, a radar 123, a laser rangefinder 124, and a camera 125.
  • the sensing system 120 may also include sensors of the internal system of the monitored vehicle 100 (for example, an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors can be used to detect objects and their corresponding characteristics (position, shape, direction, speed, etc.). Such detection and identification are key functions for the safe operation of the autonomous vehicle 100.
  • the positioning system 121 can be used to estimate the geographic location of the vehicle 100.
  • the IMU 122 may be used to sense changes in the position and orientation of the vehicle 100 based on inertial acceleration.
  • the IMU 122 may be a combination of an accelerometer and a gyroscope.
  • the radar 123 may use radio signals to sense objects in the surrounding environment of the vehicle 100. In some embodiments, in addition to sensing the object, the radar 123 may also be used to sense the speed and/or direction of the object.
  • the laser rangefinder 124 may use laser light to sense objects in the environment where the vehicle 100 is located.
  • the laser rangefinder 124 may include one or more laser sources, laser scanners, and one or more detectors, as well as other system components.
  • the camera 125 may be used to capture multiple images of the surrounding environment of the vehicle 100.
  • the camera 125 may be a still camera or a video camera.
  • control system 130 controls the operation of the vehicle 100 and its components.
  • the control system 130 may include various elements, such as a steering system 131, a throttle 132, a braking unit 133, a computer vision system 134, a route control system 135, and an obstacle avoidance system 136.
  • the steering system 131 may be operated to adjust the forward direction of the vehicle 100.
  • it may be a steering wheel system in one embodiment.
  • the throttle 132 may be used to control the operating speed of the engine 111 and thereby control the speed of the vehicle 100.
  • the braking unit 133 may be used to control the deceleration of the vehicle 100; the braking unit 133 may use friction to slow down the wheels 114. In other embodiments, the braking unit 133 may convert the kinetic energy of the wheels 114 into electric current. The braking unit 133 may also take other forms to slow down the rotation speed of the wheels 114 to control the speed of the vehicle 100.
  • the computer vision system 134 may be operable to process and analyze the images captured by the camera 125 in order to identify objects and/or features in the surrounding environment of the vehicle 100.
  • the aforementioned objects and/or features may include traffic signals, road boundaries and obstacles.
  • the computer vision system 134 may use object recognition algorithms, structure from motion (SFM) algorithms, video tracking, and other computer vision technologies.
  • the computer vision system 134 may be used to map the environment, track objects, estimate the speed of objects, and so on.
  • the route control system 135 may be used to determine the travel route of the vehicle 100.
  • the route control system 135 may combine data from sensors, GPS, and one or more predetermined maps to determine a travel route for the vehicle 100.
  • the obstacle avoidance system 136 may be used to identify, evaluate, and avoid or otherwise cross potential obstacles in the environment of the vehicle 100.
  • The control system 130 may additionally or alternatively include components other than those shown and described, or some of the components shown above may be omitted.
  • For example, it may include a path planning module for planning the driving path of the vehicle, where the path planning may be road-level or lane-level; it may include a motion planning module for determining the drivable area or for guiding the vehicle to drive according to traffic rules; it may include a navigation control module for giving the driver driving instructions and/or instructing the autonomous vehicle to generate corresponding execution actions; and it may include an early warning strategy planning module for generating alarm signals for early warning strategy planning, so as to avoid potential safety hazards such as violations of traffic rules.
  • The vehicle 100 can interact with external sensors, other vehicles, other computer systems, or users through peripheral devices 140, where the peripheral devices 140 may include a wireless communication system 141, an onboard computer 142, a microphone 143, and/or a speaker 144.
  • the peripheral device 140 may provide a means for the vehicle 100 to interact with the user interface 170.
  • the onboard computer 142 may provide information to the user of the vehicle 100.
  • The user interface 170 can also be used to receive user input through the onboard computer 142, which can be operated via a touch screen.
  • the peripheral device 140 may provide a means for the vehicle 100 to communicate with other devices located in the vehicle.
  • the microphone 143 may receive audio (eg, voice commands or other audio input) from the user of the vehicle 100.
  • the speaker 144 may output audio to the user of the vehicle 100.
  • the wireless communication system 141 may wirelessly communicate with one or more devices directly or via a communication network.
  • The wireless communication system 141 can use 3G cellular communication, for example code division multiple access (CDMA), EVDO, or global system for mobile communications (GSM)/general packet radio service (GPRS); 4G cellular communication, such as long term evolution (LTE); or 5G cellular communication.
  • the wireless communication system 141 can communicate with a wireless local area network (WLAN) by using wireless Internet access (WiFi).
  • The wireless communication system 141 may communicate directly with devices using an infrared link, Bluetooth, or ZigBee, or via other wireless protocols such as various vehicle communication systems; for example, the wireless communication system 141 may include one or more dedicated short range communications (DSRC) devices, which may carry public and/or private data communications between vehicles and/or roadside stations.
  • the power supply 160 may provide power to various components of the vehicle 100.
  • the power source 160 may be a rechargeable lithium ion battery or a lead-acid battery.
  • One or more battery packs of such batteries may be configured as a power source to provide power to various components of the vehicle 100.
  • the power source 160 and the energy source 113 may be implemented together, such as in some all-electric vehicles.
  • Part or all of the functions of the vehicle 100 may be controlled by the computer system 150, where the computer system 150 may include at least one processor 151 that executes instructions stored in a non-transitory computer-readable medium such as the memory 152.
  • the computer system 150 may also be multiple computing devices that control individual components or subsystems of the vehicle 100 in a distributed manner.
  • the processor 151 may be any conventional processor, such as a commercially available central processing unit (CPU).
  • the processor may be a dedicated device such as an application specific integrated circuit (ASIC) or other hardware-based processor.
  • Although FIG. 2 functionally illustrates the processor, the memory, and other elements of the computer in the same block, those of ordinary skill in the art should understand that the processor, computer, or memory may in fact comprise multiple processors, computers, or memories that are not necessarily housed in the same physical enclosure.
  • For example, the memory may be a hard disk drive or another storage medium located in a housing different from that of the computer. Therefore, a reference to a processor or computer should be understood to include a reference to a collection of processors, computers, or memories that may or may not operate in parallel. Rather than a single processor performing all the steps described here, some components, such as the steering and deceleration components, may each have their own processor that performs only the calculations related to that component's function.
  • the processor may be located away from the vehicle and wirelessly communicate with the vehicle.
  • In some embodiments, some of the processes described herein are executed on a processor disposed in the vehicle while others are executed by a remote processor, including taking the steps necessary to perform a single maneuver.
  • the memory 152 may contain instructions 153 (eg, program logic), which may be executed by the processor 151 to perform various functions of the vehicle 100, including those functions described above.
  • the memory 152 may also contain additional instructions, for example, including sending data to, receiving data from, interacting with, and/or performing data to one or more of the traveling system 110, the sensing system 120, the control system 130, and the peripheral device 140. Control instructions.
  • the memory 152 may also store data, such as road maps, route information, the position, direction, and speed of the vehicle, and other such vehicle data, as well as other information. Such information may be used by the vehicle 100 and the computer system 150 during the operation of the vehicle 100 in autonomous, semi-autonomous, and/or manual modes.
  • the user interface 170 may be used to provide information to or receive information from a user of the vehicle 100.
  • the user interface 170 may include one or more input/output devices in the set of peripheral devices 140, for example, a wireless communication system 141, a car computer 142, a microphone 143, and a speaker 144.
  • the computer system 150 may control the functions of the vehicle 100 based on inputs received from various subsystems (for example, the traveling system 110, the sensing system 120, and the control system 130) and from the user interface 170.
  • the computer system 150 may use input from the control system 130 in order to control the braking unit 133 to avoid obstacles detected by the sensing system 120 and the obstacle avoidance system 136.
  • the computer system 150 is operable to provide control of many aspects of the vehicle 100 and its subsystems.
  • one or more of these components described above may be installed or associated with the vehicle 100 separately.
  • the storage 152 may exist partially or completely separately from the vehicle 100.
  • the above-mentioned components may be communicatively coupled together in a wired and/or wireless manner.
  • FIG. 2 should not be construed as a limitation to the embodiment of the present application.
  • the vehicle 100 may be an autonomous vehicle traveling on a road, and may recognize objects in its surrounding environment to determine the adjustment to the current speed.
  • the object may be other vehicles, traffic control equipment, or other types of objects.
  • each recognized object can be considered independently, and the respective characteristics of the object, such as its current speed, acceleration, and distance from the vehicle, can be used to determine the speed to which the self-driving car is to be adjusted.
  • the vehicle 100 or a computing device associated with the vehicle 100 may predict the behavior of the identified object based on the characteristics of the identified object and the state of the surrounding environment (for example, traffic, rain, ice on the road, etc.).
  • the recognized objects may depend on each other's behavior; therefore, all recognized objects can also be considered together to predict the behavior of a single recognized object.
  • the vehicle 100 can adjust its speed based on the predicted behavior of the identified object.
  • the self-driving car can determine, based on the predicted behavior of the object, what adjustment (e.g., accelerating, decelerating, or stopping) will bring the vehicle to a stable state.
  • other factors may also be considered to determine the speed of the vehicle 100, such as the lateral position of the vehicle 100 on the road on which it is traveling, the curvature of the road, the proximity of static and dynamic objects, and so on.
  • the computing device can also provide instructions to modify the steering angle of the vehicle 100 so that the self-driving car follows a given trajectory and/or maintains safe lateral and longitudinal distances from objects near the self-driving car (for example, cars in adjacent lanes on the road).
  • the above-mentioned vehicle 100 may be a car, truck, motorcycle, bus, boat, airplane, helicopter, lawn mower, recreational vehicle, playground vehicle, construction equipment, tram, golf cart, train, and trolley, etc.
  • the embodiments of the present application are not particularly limited in this respect.
  • the lane line tracking results can be transmitted to the route control system 135, and the route control system 135 can generate corresponding instructions and have them executed by the steering system 131, the throttle 132, the braking unit 133, etc. in the traveling system 110 or the control system 130, so as to control the next behavior of the vehicle.
  • the tracking results can also be transmitted to any one or more other modules of the control system 130 (not shown in the figure), such as a path planning module, a driving planning module, a navigation control module, or an early warning strategy planning module, so that the above modules can implement corresponding functions based on the lane line tracking results in combination with other information, such as generating planned routes, determining drivable areas, generating navigation instructions, or generating early warning signals.
  • the vehicle 100 shown in FIG. 2 may be an automatic driving vehicle, and the automatic driving system will be described in detail below.
  • Fig. 3 is a schematic diagram of an automatic driving system according to an embodiment of the present application.
  • the automatic driving system shown in FIG. 3 includes a computer system 201, where the computer system 201 includes a processor 203, and the processor 203 is coupled to a system bus 205.
  • the processor 203 may be one or more processors, where each processor may include one or more processor cores.
  • the display adapter 207 (video adapter) can drive the display 209, and the display 209 is coupled to the system bus 205.
  • the system bus 205 may be coupled to an input/output (I/O) bus 213 through a bus bridge 211, and an I/O interface 215 is coupled to an I/O bus.
  • the I/O interface 215 communicates with a variety of I/O devices, such as an input device 217 (e.g., keyboard, mouse, touch screen) and a media tray 221 (e.g., CD-ROM, multimedia interface).
  • the transceiver 223 can send and/or receive radio communication signals, and the camera 255 can capture static and dynamic digital video images.
  • the interface connected to the I/O interface 215 may be the USB port 225.
  • the processor 203 may be any traditional processor, such as a reduced instruction set computer (RISC) processor, a complex instruction set computer (CISC) processor, or a combination of the foregoing.
  • the processor 203 may be a dedicated device such as an application specific integrated circuit (ASIC); the processor 203 may be a neural network processor or a combination of a neural network processor and the above-mentioned traditional processors.
  • the computer system 201 may be located far away from the autonomous driving vehicle, and may wirelessly communicate with the autonomous driving vehicle.
  • some of the processes described in this application are executed on a processor provided in an autonomous vehicle, and others are executed by a remote processor, including taking actions required to perform a single manipulation.
  • the computer system 201 can communicate with the software deployment server 249 through the network interface 229.
  • the network interface 229 may be a hardware network interface, such as a network card.
  • the network 227 may be an external network, such as the Internet, or an internal network, such as an Ethernet or a virtual private network (VPN).
  • the network 227 may also be a wireless network, such as a WiFi network, a cellular network, and so on.
  • the kernel 241 may be composed of those parts of the operating system that are used to manage memory, files, peripherals, and system resources, and it interacts directly with the hardware.
  • the operating system kernel usually runs processes and provides inter-process communication, CPU time slice management, interrupts, memory management, I/O management, and so on.
  • application programs 243 include programs that control the self-driving car, such as a program that manages the interaction between the autonomous vehicle and obstacles on the road, a program that controls the route or speed of the autonomous vehicle, and a program that controls the interaction between the autonomous vehicle and other autonomous vehicles on the road.
  • the application program 243 also exists on the system of the software deployment server 249. In one embodiment, the computer system 201 may download the application program from the software deployment server 249 when the automatic driving-related program 247 needs to be executed.
  • the application program 243 may also be a program that interacts with an autonomous vehicle and a lane line on the road, that is, a program that can track lane lines in real time.
  • the application program 243 may also be a program for controlling an automatic driving vehicle to perform automatic parking.
  • the sensor 253 may be associated with the computer system 201, and the sensor 253 may be used to detect the environment around the computer system 201.
  • the sensor 253 can detect a lane on the road; for example, it can detect lane lines and can track, in real time, lane line changes within a certain range in front of the vehicle while the vehicle is moving (e.g., driving).
  • the sensor 253 can detect animals, cars, obstacles, and crosswalks.
  • the sensor can also detect the environment surrounding the animals, cars, obstacles, and crosswalks; for example, the environment around an animal may include other animals around it, weather conditions, the brightness of the surrounding environment, and so on.
  • the sensor may be a camera, an infrared sensor, a chemical detector, a microphone, etc.
  • the sensor 253 may be used to detect the lane lines in front of the vehicle, so that the vehicle can perceive lane changes during travel and plan and adjust the driving of the vehicle in real time accordingly.
  • the sensor 253 can be used to detect the size and position of the parking space and the obstacles around the vehicle, so that the vehicle can perceive the distance between the parking space and the obstacles, and perform collision detection when parking to prevent collisions between the vehicle and obstacles.
  • the computer system 150 shown in FIG. 2 may also receive information from other computer systems or transfer information to other computer systems.
  • the sensor data collected from the sensing system 120 of the vehicle 100 may be transferred to another computer to process the data.
  • FIG. 4 is a schematic diagram of the application of a cloud-side command automatic driving vehicle according to an embodiment of the present application.
  • the data from the computer system 312 may be transmitted to the server 320 on the cloud side via the network for further processing.
  • the network and intermediate nodes may include various configurations and protocols, including the Internet, the World Wide Web, intranets, virtual private networks, wide area networks, local area networks, private networks using proprietary communication protocols of one or more companies, Ethernet, WiFi, and HTTP, as well as various combinations of the foregoing; such communication can be performed by any device capable of transferring data to and from other computers, such as modems and wireless interfaces.
  • the server 320 may include a server with multiple computers, such as a load balancing server group, which exchanges information with different nodes of the network for the purpose of receiving, processing, and transmitting data from the computer system 312.
  • the server may be configured similarly to the computer system 312, with a processor 330, a memory 340, instructions 350, and data 360.
  • the information related to the road conditions around the vehicle includes information about other vehicles around the vehicle and obstacle information.
  • current lane line detection and lane line tracking often need to rely on a plane assumption, for example, that the autonomous vehicle is in a flat area with no slopes; the lane lines in the image coordinate system are then matched with the lane lines in the vehicle body coordinate system of the autonomous vehicle to complete the spatial positioning of the lanes.
  • the plane assumption, that is, the assumption that the road on which the vehicle is located is completely flat with no ramps, is often difficult to satisfy in the real world, resulting in low accuracy of lane positioning.
  • moreover, the road surface conditions change in real time while the vehicle is traveling, so the above method cannot adapt to constantly changing road surface conditions.
  • current lane line detection and lane line tracking also often need to assume that the lane lines on the road are parallel to each other, but the number of lane lines often changes near intersections; for example, one or more lane lines are added at an intersection.
  • in that case the lane lines do not satisfy the premise that lane lines are parallel to each other, making it impossible to accurately locate the lane.
  • in the prior art, the mapping relationship between the vehicle body coordinate system and the image coordinate system is by default assumed to be immutable, that is, the vehicle body coordinate system and the image coordinate system are assumed to always remain parallel; therefore, when the predicted value of the lane line model obtained in the vehicle body coordinate system is matched, through this unchanging mapping relationship, with the lane line detected in the image coordinate system, a relatively large deviation occurs. The prior art ignores this deviation, resulting in a large error in the lane line tracking result. In the scheme of the embodiments of the present application, this deviation can be obtained and eliminated, thereby improving the accuracy of lane line tracking.
  • the embodiments of the present application provide a lane line tracking method and device, which eliminate the impact of uneven road surfaces by acquiring the mapping relationship between the image coordinate system and the vehicle body coordinate system in real time.
  • acquiring the mapping relationship in real time makes it possible to capture changes in the mapping relationship between the image coordinate system and the vehicle body coordinate system and to obtain a more accurate mapping relationship, thereby eliminating the impact of uneven road surfaces.
  • a homography matrix is used to represent the mapping relationship between the image coordinate system and the vehicle body coordinate system.
  • a real-time homography matrix is used to represent the real-time mapping relationship, and the lane line tracking result is determined by combining the lane line detection information obtained in the image coordinate system with the predicted value of the lane line model in the vehicle body coordinate system, thereby eliminating the impact of uneven road surfaces on lane line tracking and improving the accuracy of lane line tracking.
  • since the solution of the embodiments of the present application tracks each lane line separately, it does not require the lane lines to be parallel; it can be applied to any urban road scene, it improves the tracking accuracy of the lanes, and it is universally applicable to urban roads.
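  • as a minimal illustration of the transfer step used throughout this scheme, the following Python sketch maps lane points from the vehicle body ground plane into the image through a 3x3 homography; the function name and the single-matrix assumption are illustrative, not taken from the embodiment.

```python
import numpy as np

def body_to_image(H, pts_body):
    """Map Nx2 ground-plane points (vehicle body coordinates) into the
    image plane with a 3x3 homography H, using homogeneous coordinates."""
    pts_h = np.hstack([pts_body, np.ones((len(pts_body), 1))])  # to homogeneous
    proj = pts_h @ H.T
    return proj[:, :2] / proj[:, 2:3]  # perspective normalization
```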
  • Fig. 5 is a schematic diagram of a lane line detection and tracking device according to an embodiment of the present application.
  • the device 500 may be used to perform a lane line detection process and/or be used to perform a lane line tracking process.
  • the device 500 may include a perception module 510, a lane line detection module 520, and a lane line tracking module 530.
  • the lane line detection module 520 may be used to implement lane line detection in a pixel coordinate system (image coordinate system), and the module may be composed of a deep learning network for acquiring the line type and pixel points of the lane line.
  • the lane line detection module 520 can receive the image from the perception module 510 and perform feature extraction and other processing on the image to obtain the feature information of the lane line, and then classify and extract the pixels according to the feature information of the lane line to obtain the line type of the lane line and the pixel points of the lane line.
  • the above line type can be understood as the type of lane line, such as solid line, dashed line, double line, etc., which are common lane line types on urban roads.
  • the above are only examples of several line types and do not constitute a limitation.
  • other line types can also be used; that is, any lane line type that already exists on roads, or that may be newly created in the future according to policy changes, can serve as an example of the lane line types described in this application.
  • the above-mentioned pixel points of the lane line can be understood as the coordinates of the lane line in the pixel coordinate system, and can also be understood as the relative position of the lane line in the image.
  • the lane line tracking module 530 can be used to obtain lane line types and pixel points from the lane line detection module 520, and can also be used to obtain self-vehicle information from other modules.
  • the self-vehicle information of a vehicle can be understood as information describing the state and movement of the vehicle; it can include the self-vehicle motion information of the vehicle, and the self-vehicle motion information can be understood as dynamic information, for example, any one or more kinds of dynamic information such as the vehicle speed or angular velocity.
  • the lane line tracking module 530 may predict the lane line and/or update the lane line model according to one or more of the above-mentioned lane line type, lane line pixel points, self-vehicle information, and the like.
  • the lane line tracking module 530 may be used to obtain the predicted value of the lane line at the next time by using a lane line prediction method (for example, the IMM algorithm described below) according to the self-vehicle information at the current time.
  • the homography matrix is then calculated using the predicted value of the lane line at the next time and the pixel points of the lane line obtained at the next time.
  • the predicted value of the lane line at the next time is transferred into the image using the homography matrix, and the lane line tracking result is determined according to the pixel points of the lane line obtained at the next time and the transferred predicted value.
  • the lane line tracking result may be the final lane line prediction value determined by using the lane line prediction value (for example, the second prediction value below).
  • the lane line tracking results output by the lane line tracking module 530 can also be transmitted to other modules; for example, they can be transmitted to a drivable area detection module to obtain a more accurate drivable area, or to a traffic rule control module to control the vehicle so that it does not violate traffic rules, and they can also be used in each of the application scenarios described above, which are not listed one by one here.
  • the modules shown in FIG. 5 are only logically divided, and there are no restrictions on the division method, and other division methods may also be used.
  • the lane line detection module 520 can be used as a part of the lane line tracking module 530, which is equivalent to setting an acquisition module in the lane line tracking module 530 to obtain the line type and pixel points of the lane line;
  • the perception module 510 can be used as a module independent of the device 500; that is, the device 500 only needs to be able to obtain the images to be processed and does not need to capture images in real time. As another example, both the perception module 510 and the lane line detection module 520 can be used as modules independent of the device 500; that is, the lane line tracking module 530 only needs to be able to obtain the detection information of the lane line (such as the above-mentioned line type and pixel points). As yet another example, the perception module 510 and the lane line detection module 520 can be integrated into the lane line tracking module 530.
  • Fig. 6 is a schematic flowchart of a lane line detection method according to an embodiment of the present application.
  • the detection method shown in FIG. 6 may be executed by the vehicle shown in FIG. 2, or the automatic driving system shown in FIG. 3, or the detection system shown in FIG. 5.
  • it may be executed by the lane line detection module 520 in FIG. 5.
  • alternatively, an acquisition module may be set in the lane line tracking module 530, and the method is executed by the acquisition module.
  • the steps of the method shown in FIG. 6 are introduced below.
  • the image to be processed may be obtained by using a camera or a video camera, etc.; it may be obtained in real time, or may be read from a storage module, device, or piece of equipment.
  • the lane line feature information refers to information that can express the characteristics of the lane line.
  • a neural network-based method may be used to classify the lane lines to determine the line type of the lane lines.
  • the line type refers to the types of lane lines that may exist in the road.
  • lane lines include solid lines, dashed lines, double lines, and so on.
  • no distinction is made as to whether the lane line is a straight line or a curve, because the lane line can be represented by a curve, and a straight line can be regarded as a special case of a curve.
  • a regression method or an instance segmentation method may be used to obtain the lane line pixels in the image.
  • the regression method is equivalent to point-by-point extraction of lane lines in the image, that is, points that can represent lane lines can be obtained.
  • instance segmentation can be understood as recognizing the contours of objects at the pixel level, which is equivalent to segmenting the lane lines in the image in the form of strip-shaped boxes; that is to say, using the instance segmentation method, strip-shaped boxes that represent lane lines can be obtained.
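  • as an illustration of collecting lane pixel points from a segmentation result, the following is a minimal Python sketch; the mask layout (integer instance ids, 0 for background) is an assumption for illustration, not the embodiment's own data format.

```python
import numpy as np

def lane_pixels_from_mask(instance_mask):
    """Collect pixel coordinates per lane instance from a segmentation mask.

    instance_mask: HxW integer array, 0 = background, 1..N = lane instance ids.
    Returns a dict mapping instance id -> (row, col) pixel coordinates.
    """
    lanes = {}
    for lane_id in np.unique(instance_mask):
        if lane_id == 0:
            continue  # skip background
        lanes[int(lane_id)] = np.argwhere(instance_mask == lane_id)
    return lanes
```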
  • the lane line detection method shown in Figure 6 can be used to obtain lane line detection information in the pixel coordinate system (image coordinate system), such as the line type and pixel points of the lane line.
  • the detection information can be used as the input data of the lane line tracking method. That is to say, the detection information of the lane line can be understood as the information of the lane line in the image coordinate system obtained by the lane line detection method, and the detection information can include at least one of line type and pixel point.
  • Fig. 7 is a schematic flowchart of a lane line tracking method according to an embodiment of the present application. The following describes the steps shown in FIG. 7.
  • the self-vehicle information of a vehicle can be understood as information describing the state and movement of the vehicle; it can include the self-vehicle motion information of the vehicle, and the self-vehicle motion information can be understood as dynamic information, for example, any one or more kinds of dynamic information such as the vehicle speed or angular velocity (for example, the yaw angular velocity).
  • the relevant device or module provided above can be used to obtain the vehicle's self-vehicle information in real time, or the information can be read from a storage device.
  • the first predicted value may be read from, for example, a storage device, or may be calculated using the self-vehicle motion information; it may be generated at the previous time or at the current time. That is to say, the first predicted value may have been obtained by prediction after the self-vehicle motion information and other information of the previous time were obtained before the current time; alternatively, only the self-vehicle motion information may have been stored at the previous time, with no calculation performed until the current time, and so on; these cases are not listed one by one here.
  • the first predicted value can be obtained using the following method.
  • the lane line model can be understood as a mathematical model of a lane line, or can be understood as an expression representing a lane line.
  • for example, a curve equation can be used to represent the lane line, or a polynomial can be used to represent the lane line, so the lane line model can also be called the lane line equation, lane line curve equation, lane line polynomial, and so on.
  • the lane line model may be represented by parameters such as the position (intercept), angle, curvature, rate of change of curvature, and radius of curvature of the lane line.
  • the lane line model can also be represented by parameters such as the lane width, the position of the vehicle center deviating from the lane center, the angle of the lane line, and the curvature. It should be understood that there may also be other ways of representing the lane line; any way that can express the relative position and change trend of the lane line may be used, and there is no limitation.
  • a lane line prediction method, or so-called lane line prediction algorithm, can be used to obtain the predicted value of the lane line at the current time in the vehicle body coordinate system, that is, the first predicted value, according to the time interval between the current time and the previous time and the self-vehicle information at the previous time.
  • the lane line prediction method may use, for example, the IMM algorithm, also referred to as the IMM prediction method, to obtain the above-mentioned predicted value of the lane line.
  • the lane line prediction algorithm can also be regarded as using the model of the lane line prediction algorithm to process some input data to obtain the predicted value of the lane line model.
  • the model of the IMM algorithm can be called the IMM prediction model, the IMM algorithm model, and so on. It can be seen that, in the embodiments of the present application, the model of the lane line prediction algorithm and the lane line model are different concepts.
  • the model of the lane line prediction algorithm is the model of the prediction algorithm used to obtain the predicted value of the lane line model, while the lane line model refers to the curve equation, or mathematical expression, of the lane line in the vehicle body coordinate system.
  • Fig. 8 is a schematic flow chart of using IMM to predict lane lines according to an embodiment of the present application.
  • the following introduces how the predicted value of the lane line in the vehicle body coordinate system at the current time is obtained by using the lane line prediction method provided by the embodiment of the present application.
  • the EKF filter can be used to filter the lane line model in the vehicle body coordinate system to improve the accuracy of the lane line model.
  • whether prediction is required is determined according to the time interval determined by the time stamps of the input images; that is, whether the state value of the filter needs to be changed is determined according to the time interval determined by the times corresponding to the two frames of the input images, so as to obtain the predicted value of the lane line.
  • the state value of the filter can also be understood as the coefficient of the filter, or the state and parameters of the filter.
  • when the time interval ΔT determined by the two frames of images satisfies MinLoopTime < ΔT < MaxLoopTime, the filter is set to be updatable, or is said to be in working mode, which can also be referred to as starting the filter.
  • MaxLoopTime represents the maximum time interval threshold, which can be set to, for example, 200 milliseconds (ms).
  • MinLoopTime represents the minimum time interval threshold, which can be set to, for example, 10 ms.
  • assuming that the time with the later time stamp of the two frames of images is taken as the current time, the time with the earlier time stamp can be regarded as the previous time.
  • alternatively, the time with the later time stamp can be regarded as the later time, and the time with the earlier time stamp as the current time.
  • the two frames of images may or may not be consecutive. When the two frames of images are consecutive, they can be considered to be the images of the previous time and the current time respectively, or the images of the current time and the next time respectively.
  • in step 901, it is determined whether the time interval between the previous time (for example, T(t-1) above) and the current time (for example, T(t) above) is within a preset range, so as to determine whether to make a prediction.
  • the reason is that when the time interval is too long, the vehicle may in practice have traveled a long distance, the lane lines may have changed greatly, and the vehicle may no longer be driving on the same road; in this case, such a prediction is likely to cause large errors.
  • when the time interval is too small, the vehicle may in practice have hardly moved forward; in this case, although the prediction would still be accurate, the parameters may hardly change because the change is very small, so such overly frequent prediction would instead waste resources to a certain extent.
  • the upper limit of the foregoing preset range can be set, according to experience or experiments, to values other than the 200 ms in the foregoing example.
  • the lower limit of the preset range can likewise be set, according to experience or experiments, to values other than the 10 ms in the foregoing example, which is not repeated here.
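  • the gating rule above can be sketched in a few lines of Python; the threshold values below reuse the 200 ms and 10 ms examples from the text, and the function name is illustrative.

```python
MAX_LOOP_TIME = 0.200  # assumed upper threshold in seconds (200 ms example)
MIN_LOOP_TIME = 0.010  # assumed lower threshold in seconds (10 ms example)

def should_predict(t_prev, t_curr,
                   min_dt=MIN_LOOP_TIME, max_dt=MAX_LOOP_TIME):
    """Gate the filter update on the inter-frame interval.

    Returns True only when the interval between the two image time stamps
    falls inside the preset range, i.e. the filter may enter working mode
    and produce a lane line prediction.
    """
    dt = t_curr - t_prev
    return min_dt < dt < max_dt
```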
  • the state space may be used to represent the tracker (filter) in the lane line tracking method.
  • the state space of the tracker includes at least one of the following parameters: the curvature change of the lane line model (lane line curve equation), the curvature of the lane line, the slope of the tangent of the lane line at the origin of the vehicle body coordinate system, and the offset of the lane line at the origin of the vehicle body coordinate system. For example, suppose a total of four lane lines need to be tracked, the left-left lane line, the left lane line, the right lane line, and the right-right lane line; the state space X of the tracker can then be constructed as the following 9-dimensional vector (a code sketch of this construction follows the symbol list below): X = [curve_change, curve, slope, offset_leftneighbor, offset_left, offset_right, offset_rightneighbor, pitch, yaw]^T.
  • curve change represents the curvature change of the lane line model
  • curve represents the curvature of the lane line
  • slope represents the slope of the tangent line of the lane line at the origin of the vehicle body coordinate system
  • offset_leftneighbor, offset_left, offset_right, and offset_rightneighbor respectively represent the offsets of the left-left, left, right, and right-right lane lines at the origin of the vehicle body coordinate system
  • pitch represents the pitch angle
  • yaw represents the yaw angle.
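  • a minimal Python sketch of assembling the 9-dimensional state vector described above; the element ordering follows the symbol list and is an assumption for illustration.

```python
import numpy as np

# Assumed ordering of the 9-dimensional tracker state described above.
STATE_NAMES = ["curve_change", "curve", "slope",
               "offset_leftneighbor", "offset_left",
               "offset_right", "offset_rightneighbor",
               "pitch", "yaw"]

def make_state(curve_change, curve, slope,
               offset_leftneighbor, offset_left,
               offset_right, offset_rightneighbor,
               pitch, yaw):
    """Assemble the tracker state vector X: one shared lane-shape triple,
    four per-lane offsets, plus the pitch and yaw angles."""
    return np.array([curve_change, curve, slope,
                     offset_leftneighbor, offset_left,
                     offset_right, offset_rightneighbor,
                     pitch, yaw], dtype=float)
```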
  • the following equations can be used to update the state of the tracker (filter); the symbols are as follows (see the sketch after this list):
  • X(t-1|t-1) represents the state vector formed by the state of the filter at time t-1, which can also be called the vector formed by the coefficients of the filter;
  • P(t-1|t-1) represents the covariance matrix at time t-1;
  • Q represents the covariance matrix of the system noise;
  • q represents the noise vector of the system, as in equation (8) above; that is to say, the covariance matrix of the noise is constructed from the noise vector of the system.
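  • the source does not reproduce the update equations themselves, so the following is a generic Kalman prediction step in Python using the symbols above; the state-transition matrix F, built from the time interval and the ego motion, is an assumption of this sketch.

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """One Kalman prediction step, a generic sketch of the tracker update.

    x: state vector X(t-1|t-1); P: covariance P(t-1|t-1);
    F: state-transition matrix (assumed built from dt and the ego motion);
    Q: process-noise covariance (e.g. Q = q @ q.T for a noise vector q).
    Returns the predicted X(t|t-1) and P(t|t-1).
    """
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred
```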
  • step 902 mainly updates the filter state at the current time based on the time interval (for example, ΔT above) and the self-vehicle information (for example, the vehicle speed and angular velocity above) at the previous time (for example, time t-1 above) and the current time (for example, time t above), so as to obtain the lane line predicted value at the current time.
  • the detection information of the lane line may include pixel point information of the lane line, and may also include the line type of the lane line.
  • the first detection information may include the pixel points of the lane line at the current time in the image coordinate system, and may also include the line type of the lane line at the current time.
  • the current time is determined by the time corresponding to an image frame. For example, suppose the time corresponding to a certain frame of image is taken as the current time; then a previous time refers to a time, corresponding to an image frame, that is earlier than the current time, and the previous time may include the immediately preceding time; a later time refers to a time, corresponding to an image frame, that is later than the current time, and the later time may include the next time.
  • step 701 and step 702 may be executed at the same time or at different times, and the order of execution is not limited, and the method of obtaining may also be the same or different.
  • the first mapping relationship is used to represent the real-time mapping relationship between the image coordinate system and the vehicle body coordinate system; in other words, the first mapping relationship is the mapping relationship between the image coordinate system and the vehicle body coordinate system at the current time.
  • the self-vehicle information of a vehicle can be understood as information describing the state and movement of the vehicle; it can include the self-vehicle motion information of the vehicle, and the self-vehicle motion information can be understood as dynamic information, for example, any one or more kinds of dynamic information such as the vehicle speed or angular velocity (for example, the yaw angular velocity).
  • the relevant device or module provided above can be used to obtain the vehicle's self-vehicle information in real time, or the information can be read from a storage device.
  • the self-vehicle information at the current time can be used to obtain the lane line predicted value at the next time, and the lane line detection information in the image coordinate system at the next time together with the lane line predicted value at the next time are then used to determine a more accurate predicted value at the next time (which can also be understood as obtaining a more accurate lane line model).
  • a homography matrix can be used to describe the mapping relationship between the image coordinate system and the vehicle body coordinate system, and the homography matrix at a certain time can be used to represent the mapping relationship between the two coordinate systems at that time; that is, a real-time homography matrix can be used to express the real-time mapping relationship.
  • obtaining the initial homography matrix includes: determining the position of the vehicle body in the image, and determining multiple known landmarks in the vehicle body coordinate system in front of the vehicle; the homography matrix is then calculated according to the coordinate information of the vehicle body and the multiple landmarks in the image.
  • the initial homography matrix can also be understood as the initial value of the homography matrix, which is used to represent the initial value of the homography relationship between the car body coordinate and the image coordinate system.
  • the initial homography matrix does not need to be re-acquired every time, that is, in the lane line tracking process, the initial homography matrix can be used continuously for a period of time after the first acquisition.
  • the initial homography matrix is used to represent the initial mapping relationship, that is, the initial value of the mapping relationship between the image coordinate system and the vehicle body coordinate system.
  • when the homography matrix at the current time is calculated, the lane line predicted value in the vehicle body coordinate system (such as the first predicted value) can be transferred to the image plane (the plane determined by the image coordinate system) according to the initial homography matrix and matched against the lane line in the image plane (for example, the first detection information); the matching error is then minimized, and the homography matrix of the area in front of the vehicle is finally obtained through an iterative method.
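  • a minimal Python sketch of such an iterative refinement, assuming the point correspondences have already been established; the least-squares parameterization (fixing H[2,2] = 1, i.e. 8 degrees of freedom) and the function names are illustrative, not the embodiment's own formulation.

```python
import numpy as np
from scipy.optimize import least_squares

def refine_homography(H_init, pts_body, pts_image):
    """Refine a homography by minimizing the reprojection error between
    predicted lane points (vehicle body plane) and detected lane pixels.

    H_init: 3x3 initial homography (assumed H_init[2, 2] != 0);
    pts_body, pts_image: Nx2 matched point arrays.
    """
    def residual(h):
        H = np.append(h, 1.0).reshape(3, 3)  # fix H[2, 2] = 1 (8 DOF)
        p = np.hstack([pts_body, np.ones((len(pts_body), 1))]) @ H.T
        proj = p[:, :2] / p[:, 2:3]
        return (proj - pts_image).ravel()

    h0 = (H_init / H_init[2, 2]).ravel()[:8]
    result = least_squares(residual, h0)  # iterative minimization
    return np.append(result.x, 1.0).reshape(3, 3)
```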
  • the homography matrices can also be calculated in real time by region. Below, in conjunction with FIG. 9, the calculation of three homography matrices is taken as an example to introduce the method for obtaining homography matrices in real time; it should be understood that, when only one homography matrix is set, the method shown in FIG. 9 is equally applicable.
  • FIG. 9 is a schematic diagram of a calculation process of a homography matrix according to an embodiment of the present application. The steps shown in Figure 9 are described below.
  • regions can be divided, and the homography matrix of each region can be calculated.
  • for example, the area in front of the vehicle can be divided into multiple regions according to distance, and multiple homography matrices corresponding to the multiple regions can be obtained, where each of the multiple regions corresponds to at least one homography matrix.
  • for example, the area in front of the vehicle can be divided, based on practice or on test data, into three regions of 0-20 m, 20 m-50 m, and 50 m-100 m; these three regions can correspond to three homography matrices H0, H1, and H2 respectively, that is, 0-20 m corresponds to matrix H0, 20 m-50 m corresponds to matrix H1, and 50 m-100 m corresponds to matrix H2.
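  • selecting the per-region matrix for a point at a given longitudinal distance can be sketched as follows in Python, reusing the three example regions and the matrices H0, H1, H2 above; the helper name is illustrative.

```python
# Assumed distance partition (meters) matching the example above, with one
# homography per region: H_list = [H0, H1, H2].
REGIONS = [(0.0, 20.0), (20.0, 50.0), (50.0, 100.0)]

def homography_for_distance(x, H_list):
    """Pick the region homography for a point x meters ahead of the car.

    Returns None when the point lies beyond the calibrated range.
    """
    for (near, far), H in zip(REGIONS, H_list):
        if near <= x < far:
            return H
    return None
```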
  • the initial homography matrix does not need to be re-obtained every time; it can instead be calibrated once, using the method provided above, for a period of time such as a week, a few weeks, a month, or a few months.
  • the initial homography matrices of H0, H1, and H2 can be obtained respectively.
  • that is, when regions are divided, the initial homography matrix of each region can still be obtained using the method provided above.
  • the first predicted value may be obtained by using the correlation method provided above, for example, by using the correlation method provided in step 701 shown in FIG. 7.
  • an iterative method is used to obtain the homography matrix at the current moment, for example, the above-mentioned H0, H1, and H2 are obtained.
  • the homography matrix at the current moment may be used to represent the above-mentioned first mapping relationship, that is, the mapping relationship between the image coordinate system and the vehicle body coordinate system at the current moment.
  • the homography matrix at the current moment can be used to transfer the first predicted value from the vehicle body coordinate system to the image coordinate system.
  • the method shown in FIG. 9 can be used to obtain the homography matrices of, for example, three regions, H0, H1, and H2.
  • the above three homography matrices are then used to transfer the predicted value of the lane line into the image coordinate system region by region.
  • the method for obtaining the homography matrix at another time (or the mapping relationship at another time) can be the same as the method for obtaining the homography matrix at the current time, which is equivalent to treating the "other time" as the "current time", or to replacing the "current time" in the above steps with the "other time".
  • the second predicted value is used to indicate the corrected value of the first predicted value, that is, the second predicted value can be understood as the predicted value after the first predicted value is corrected.
  • the third predicted value may be adjusted according to the first detection information, so as to obtain the second predicted value.
  • for example, the Mahalanobis distance can be calculated between the lane line corresponding to the predicted value of the lane line model transferred from the first predicted value into the image coordinate system at the current time (the third predicted value) and the lane line pixel points corresponding to the original lane line detection information in the image coordinate system (the first detection information), and the predicted value of the lane line corresponding to the smallest Mahalanobis distance is taken as the corrected predicted value of the lane line model (the second predicted value).
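  • a minimal Python sketch of this association rule, assuming each lane line has been reduced to a small feature vector in the image plane (for example, a fitted slope and intercept) and that a residual covariance S is available; the feature choice and the function names are assumptions for illustration.

```python
import numpy as np

def mahalanobis(delta, S):
    """Mahalanobis distance of residual delta under covariance S."""
    return float(np.sqrt(delta @ np.linalg.inv(S) @ delta))

def match_lane(predicted_line, detected_lines, S):
    """Associate one projected lane prediction with the closest detection.

    predicted_line / detected_lines: feature vectors in the image plane;
    returns the index of the detection with the smallest Mahalanobis
    distance, i.e. the line treated as the same lane line.
    """
    dists = [mahalanobis(predicted_line - d, S) for d in detected_lines]
    return int(np.argmin(dists))
```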
  • the method shown in FIG. 10 may be used to update the lane line model in the vehicle body coordinate system.
  • Fig. 10 is a schematic flowchart of a method for updating a lane line model in a vehicle body coordinate system according to an embodiment of the present application. The following describes the steps shown in FIG. 10.
  • step 1001 is equivalent to calculating the projection, onto the ground in the vehicle body coordinate system, of the point sequence describing the lane line in the image.
  • the following formula may be used to obtain the coordinates, in the vehicle body coordinate system, corresponding to the point sequence in the image coordinate system.
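  • the formula itself is not reproduced here; as a sketch of the projection step under the assumption that the image-to-ground mapping is the inverse of the region homography, the operation could look as follows in Python.

```python
import numpy as np

def image_to_body(H, pts_image):
    """Project Nx2 image pixels onto the ground plane of the vehicle body
    coordinate system using the inverse of the homography H (an assumed
    form of the projection; the source's exact formula is not shown)."""
    H_inv = np.linalg.inv(H)
    pts_h = np.hstack([pts_image, np.ones((len(pts_image), 1))])
    ground = pts_h @ H_inv.T
    return ground[:, :2] / ground[:, 2:3]  # perspective normalization
```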
  • the measurement includes at least one of the following parameters of the straight line segment: slope, intercept, or coordinates of the center point in the vehicle body coordinate system.
  • the point sequence of the lane line can be divided into groups of three points each, and multiple straight lines (or line segments) are fitted in the image coordinate system; that is, each group of points is fitted into a straight line, and multiple sets of measurement values of the multiple straight lines are generated.
  • for example, a set of measurement values may include the slope, the intercept, and the X-axis coordinate x_w of the center point (that is, the center point of each fitted line segment) in the vehicle body coordinate system.
  • Y_means is used to represent a set of measurement quantities, and Y_means can include the slope and intercept of the line determined by the set of points, and the projection of the center point (y_1 above) in the direction of the slope.
  • Y_means[0] represents element 0 of Y_means, Y_means[1] represents element 1 of Y_means, and so forth.
  • Y_means[0] and Y_means[1] can be obtained by the following formula.
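  • the grouping-and-fitting step can be sketched as follows in Python; fitting with np.polyfit and non-overlapping groups of three consecutive points are assumptions of this sketch.

```python
import numpy as np

def segment_measurements(points):
    """Fit a line to every group of three consecutive lane points and
    collect (slope, intercept, center) measurements.

    points: Nx2 array of lane points (already projected onto the ground).
    """
    measurements = []
    for i in range(0, len(points) - 2, 3):
        group = points[i:i + 3]
        slope, intercept = np.polyfit(group[:, 0], group[:, 1], 1)
        center = group.mean(axis=0)  # center point of the fitted segment
        measurements.append((slope, intercept, center))
    return measurements
```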
  • Updating the lane line model may include updating the slope, intercept and other parameters of the lane line model (lane line curve equation).
  • the predicted value and the detection information are used to obtain the real-time mapping relationship between the two coordinate systems; this eliminates the influence of road surface changes, because such changes manifest as changes in the mapping relationship between the two coordinate systems, and acquiring the mapping relationship in real time is equivalent to capturing the change in the mapping relationship in real time.
  • using the more accurate predicted value of the lane line model to update the model of the lane line prediction algorithm in the vehicle body coordinate system is equivalent to updating, in real time, the model parameters of the lane line prediction algorithm in the vehicle body coordinate system, which can speed up the convergence of the prediction algorithm.
  • the obtaining unit may obtain the first predicted value in the following ways.
  • the acquiring unit may be used to implement the function of the user interface 170 shown in FIG. 2, or to implement the function of the I/O interface 215 shown in FIG. 3, to perform the operation of acquiring an image.
  • the acquiring unit may also be used to implement part of the functions of the computer vision system 134 shown in FIG. 2, or to implement part of the functions of the processor 203 shown in FIG. 3, to perform the operation of processing the acquired images to obtain the first detection information.
  • the acquiring unit may also directly acquire the first detection information, for example, acquiring the first detection information from a storage device.
  • the acquiring unit may be used to implement the function of the user interface 170 shown in FIG. 2, or to implement the function of the I/O interface 215 shown in FIG. 3, to perform the operation of acquiring the first detection information.
  • the processor may have the function of the processor 151 shown in FIG. 2 or the function of the processor 203 shown in FIG. 3, or the function of the processor 330 shown in FIG. 4 to realize the above-mentioned function of executing related programs.
  • the foregoing processor may also be a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the methods, steps, and logical block diagrams disclosed in the embodiments of the present application can be implemented or executed.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a storage medium mature in the field, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
  • the storage medium is located in the memory, and the processor reads the information in the memory and, in combination with its hardware, completes the functions required of the units included in the lane line tracking device of the embodiments of the present application, or executes the steps of the lane line tracking method of the embodiments of the present application.
  • the communication interface may use a transceiver device, such as but not limited to a transceiver, to implement communication between the device and other devices or a communication network.
  • the first detection information can be obtained through a communication interface.
  • the bus may include a path for transferring information between various components of the device (for example, a memory, a processor, and a communication interface).


Abstract

The present application provides a lane line tracking method and device. The method includes: acquiring a first predicted value, where the first predicted value is used to represent a lane line model in the vehicle body coordinate system and is obtained by prediction using self-vehicle motion information at a previous time; acquiring first detection information, where the first detection information includes the pixel points of the lane line in the image coordinate system at the current time; determining a first mapping relationship according to the first predicted value and the first detection information, where the first mapping relationship is used to represent the real-time mapping relationship between the image coordinate system and the vehicle body coordinate system; and determining a second predicted value according to the first mapping relationship, where the second predicted value is used to represent a corrected value of the first predicted value. By using the predicted value and the detection information to obtain the real-time mapping relationship between the two coordinate systems, the influence of road surface changes and the like can be eliminated, thereby improving the accuracy of lane line tracking; moreover, the method does not need to rely on the plane assumption or the parallel-lane-line assumption and is therefore more universally applicable.

Description

Lane line tracking method and device. Technical field
The present application relates to the field of artificial intelligence, and in particular to a lane line tracking method and device.
Background art
Lane line detection and lane line tracking have received wide attention in vehicle driving, especially in autonomous driving, and with the popularization of autonomous driving, the requirements for the stability and accuracy of lane line tracking are becoming higher and higher.
In existing lane line tracking solutions, it is usually necessary to assume that the road surface is level and that the lane lines are parallel to each other. However, on urban roads, due to the complexity of the road structure, uneven road surfaces are very common, and lane lines on a road are often not parallel. For example, an extra left-turn lane or an extra right-turn lane at a traffic light intersection is very common; for another example, even on the same road, the number of lanes may be increased or decreased according to actual conditions during urban planning. All of these make some lane lines non-parallel. If lane lines are still tracked on the premise that the road surface is level and the lane lines are parallel, large errors will result. In short, existing lane line tracking methods have low accuracy and lack universality.
Therefore, how to improve the accuracy and universality of lane line tracking is a problem to be solved urgently.
Summary of the invention
The present application provides a lane line tracking method and device, which on the one hand can effectively improve the accuracy of lane line tracking, and on the other hand have good universality.
In a first aspect, a lane line tracking method is provided, the method including: acquiring a first predicted value, where the first predicted value is used to represent a lane line model in the vehicle body coordinate system and is obtained by prediction using self-vehicle motion information at a previous time; acquiring first detection information, where the first detection information includes the pixel points of the lane line in the image coordinate system at the current time; determining a first mapping relationship according to the first predicted value and the first detection information, where the first mapping relationship is used to represent the real-time mapping relationship between the image coordinate system and the vehicle body coordinate system; and determining a second predicted value according to the first mapping relationship, where the second predicted value is used to represent a corrected value of the first predicted value.
In the technical solution of the present application, by combining the predicted value of the lane line in the vehicle body coordinate system with the detection information in the image coordinate system, and using the predicted value and the detection information to obtain the real-time mapping relationship between the two coordinate systems, the influence of road surface changes can be eliminated and the accuracy of lane line tracking can be improved. The reason is that these road surface changes bring about changes in the mapping relationship between the two coordinate systems, and acquiring the mapping relationship in real time is equivalent to capturing changes in the mapping relationship in real time, so that the influence of uneven road surfaces on the tracking results can be effectively eliminated and a more accurate lane line predicted value can be obtained. In addition, the technical solution of the present application is no longer limited by the parallel-lane-line assumption and is equally applicable to non-parallel lane lines, and therefore has better generality, or in other words, universality.
Optionally, the detection information of the lane line may include pixel point information of the lane line, and may also include the line type of the lane line.
It should be noted that the current time is determined by the time corresponding to an image frame. For example, suppose the time corresponding to a certain frame of image is taken as the current time; then a previous time refers to a time, corresponding to an image frame, that is earlier than the current time, and the previous time may include the immediately preceding time; a later time refers to a time, corresponding to an image frame, that is later than the current time, and the later time may include the next time.
Optionally, the detection information may be acquired in real time, or may be acquired from a storage device.
It should also be noted that the self-vehicle information of a vehicle can be understood as information describing the state, movement, etc. of the vehicle. The self-vehicle information may include the self-vehicle motion information of the vehicle, and the self-vehicle motion information can be understood as dynamic information, which may include, for example, any one or more kinds of dynamic information such as the vehicle speed or angular velocity (for example, the yaw angular velocity) of the vehicle.
It should be understood that the first predicted value is obtained using the self-vehicle motion information at the previous time, which is equivalent to obtaining, at the previous time, the lane line predicted value in the vehicle body coordinate system using the lane line prediction method; after the first detection information is acquired at the current time, the first predicted value and the first detection information are considered together to determine the second predicted value, that is, the lane line predicted value at the current time and the lane line detection information at the current time are used to determine a more accurate predicted value at the current time (which can also be understood as obtaining a more accurate lane line model). Suppose the current time is changed, the original previous time is taken as the current time, and the original current time is taken as the next time, which is equivalent to advancing by one time step; this is then equivalent to using the self-vehicle information at the current time to obtain the lane line predicted value at the next time, and then using the lane line detection information in the image coordinate system at the next time together with the lane line predicted value at the next time to determine a more accurate predicted value at the next time (which can also be understood as obtaining a more accurate lane line model).
It should also be understood that the first predicted value may be read from, for example, a storage device, or may be calculated using the self-vehicle motion information and the like; it may be generated at the previous time or at the current time. That is to say, the first predicted value may have been obtained by prediction after the self-vehicle motion information and other information of the previous time were obtained before the current time; alternatively, only the self-vehicle motion information may have been stored at the previous time, with no calculation performed until the current time, and so on; these cases are not listed one by one here.
It should also be understood that the lane line model can be understood as a mathematical model of the lane line, or as a way of representing the lane line; for example, a curve equation or a polynomial can be used to represent the lane line, so the lane line model can also be called the lane line equation, lane line curve equation, lane line polynomial, and so on.
Optionally, the lane line model can be represented by parameters such as the position (intercept), angle, curvature, rate of change of curvature, and radius of curvature of the lane line.
Optionally, the lane line model can also be represented by parameters such as the lane width, the position of the vehicle center deviating from the lane center, the angle of the lane line, and the curvature. It should be understood that there may also be other ways of representing the lane line; any way that can express the relative position and change trend of the lane line may be used, and there is no limitation.
Optionally, a lane line prediction method, or so-called lane line prediction algorithm, can be used to obtain the predicted value of the lane line at the current time in the vehicle body coordinate system, that is, the first predicted value, according to the time interval between the current time and the previous time and the self-vehicle information at the previous time.
Optionally, the lane line prediction method may use, for example, the IMM algorithm, also referred to as the IMM prediction method, to obtain the above-mentioned predicted value of the lane line.
It should be noted that the lane line prediction algorithm can also be regarded as using the model of the lane line prediction algorithm to process some input data to obtain the predicted value of the lane line model; for example, the model of the IMM algorithm can be called the IMM prediction model, the IMM algorithm model, and so on. It can be seen that, in the embodiments of the present application, the model of the lane line prediction algorithm and the lane line model are different concepts: the model of the lane line prediction algorithm is the model of the prediction algorithm used to obtain the predicted value of the lane line model, while the lane line model refers to the curve equation, or mathematical expression, of the lane line in the vehicle body coordinate system; therefore, the predicted value of the lane line model can be understood as the predicted value of the lane line model obtained using the model of the lane line prediction algorithm.
With reference to the first aspect, in some implementations of the first aspect, the first predicted value may be obtained using a model of the lane line prediction algorithm. Optionally, the second predicted value can be used to update the model of the lane line prediction algorithm (for example, the IMM algorithm) at the current time. Using the more accurate predicted value of the lane line model (the second predicted value) to update the model of the lane line prediction algorithm in the vehicle body coordinate system is equivalent to updating the model parameters of the lane line prediction algorithm in the vehicle body coordinate system in real time, which can accelerate the convergence of the prediction algorithm (for example, the IMM prediction algorithm) and can improve the accuracy of the lane line model predicted using the model of the prediction algorithm.
Optionally, an EKF filter can be used to filter the lane line model in the vehicle body coordinate system, so as to improve the accuracy of the lane line model.
Optionally, the model of the lane line prediction algorithm can also be updated according to the second predicted value; when the model is used in a subsequent prediction process, a more accurate predicted value of the lane line model can be obtained. With reference to the first aspect, in some implementations of the first aspect, the first predicted value is obtained when the time interval is within a preset range; that is, whether to perform lane line prediction is first determined according to the time interval, and prediction is performed only when the time interval is within the preset range. In the above process, whether to perform prediction is decided by judging whether the time interval between the previous time and the current time is within the preset range. The reason is that when the time interval is too long, the vehicle may in practice have traveled a long distance, the lane lines may have changed greatly, and the vehicle may no longer be driving on the same road; in this case, such a prediction is likely to cause large errors. When the time interval is too small, the vehicle may in practice have hardly moved forward; in this case, although the prediction would still be accurate, the parameters may hardly change because the change is very small, so such overly frequent prediction would instead cause a waste of resources to a certain extent. Therefore, performing prediction only when the time interval is within the preset range can reduce resource occupation while obtaining a relatively accurate lane line predicted value.
Optionally, whether prediction is needed can be determined according to the time interval determined by the time stamps of the input images, that is, whether the state value of the filter needs to be changed is determined according to the time interval determined by the times corresponding to two frames of the input images, so as to obtain the lane line predicted value. The state value of the filter can also be understood as the coefficients of the filter, or the state and parameters of the filter. The time stamp can be understood as the time corresponding to an image frame.
It should be noted that, assuming the time with the later time stamp of the two frames of images is taken as the current time, the time with the earlier time stamp can be taken as the previous time; alternatively, the time with the later time stamp can be taken as the later time and the time with the earlier time stamp as the current time. That is to say, the two frames of images may or may not be consecutive. When the two frames of images are consecutive, they can be considered to be the images of the previous time and the current time respectively, or the images of the current time and the next time respectively.
With reference to the first aspect, in some implementations of the first aspect, a homography matrix can be used to represent the mapping relationship between the vehicle body coordinate system and the image coordinate system.
Optionally, regions can be divided and homography matrices of different regions can be calculated. For example, the area in front of the vehicle can be divided into multiple regions according to distance, and multiple homography matrices corresponding to the multiple regions can be acquired, where each of the multiple regions corresponds to at least one homography matrix. The real-time mapping relationship can be acquired by acquiring at least one real-time homography matrix; that is, the homography matrices of different regions are calculated in real time, and the real-time homography matrices are used to represent the real-time mapping relationship (for example, the first mapping relationship described above).
Dividing into multiple regions can more fully reflect the actual conditions of the road surface, thereby further improving the accuracy of the mapping relationship and hence the accuracy of lane line tracking. In other words, setting homography matrices by region can yield a more accurate mapping relationship between the two coordinate systems, thereby improving the accuracy of lane line tracking.
Optionally, an initial homography matrix can be obtained first, and a fourth predicted value corresponding to the first predicted value is then obtained according to the first predicted value and the initial homography matrix. The fourth predicted value can be understood as representing the corresponding value of the first predicted value under the initial mapping relationship, or as the corresponding value of the first predicted value in the initial image coordinate system. Since the initial homography matrix can be regarded as representing the mapping relationship between the plane of the vehicle body coordinate system and the initial image plane (initial image coordinate system), the fourth predicted value is equivalent to the corresponding value of the first predicted value in the initial image plane; it can also be regarded as transferring the lane line in the vehicle body coordinate system into the initial image plane. Afterwards, the first detection information and the fourth predicted value are used to determine the homography matrix at the current time. That is to say, when the road gradient changes, the initial mapping relationship between the vehicle body coordinate system and the image coordinate system has already changed; at this time, a deviation appears between the fourth predicted value and the first detection information in the image coordinate system at the current time, and a more accurate homography matrix at the current time (that is, the real-time homography matrix) can be obtained by, for example, minimizing the difference between the two, thereby obtaining the mapping relationship at the current time (that is, the real-time mapping relationship).
It should be noted that the real-time mapping relationship can be understood as a mapping relationship acquired in real time, that is, a mapping relationship continuously obtained as time advances, or a mapping relationship corresponding to different times.
Optionally, after the homography matrix at the current time is obtained, the homography matrix at the current time can be used to transfer the first predicted value from the vehicle body coordinate system into the image coordinate system. When multiple homography matrices are included, for example, when multiple homography matrices are divided by region, the same method as above can be used to obtain the homography matrix of each of the multiple regions, and the multiple homography matrices are then used to transfer the predicted value of the lane line into the image coordinate system region by region.
It should be noted that, since changes in the road surface during vehicle travel are reflected as changes in the plane determined by the image and as changes in the mapping relationship between the vehicle body coordinate system and the image coordinate system, the initial homography matrix is equivalent to determining the initial mapping relationship between the vehicle body coordinate system and the image coordinate system, while the homography matrix at the current time is equivalent to determining the real-time mapping relationship between the vehicle body coordinate system and the image coordinate system. Therefore, in the above method, the difference between the corresponding value of the predicted value of the lane line model in the initial image coordinate system and its corresponding value in the image coordinate system at the current time is used to construct and minimize a loss function, and the mapping relationship between the vehicle body coordinate system and the image coordinate system at the current time, that is, the homography matrix at the current time, is obtained in an iterative manner. It can also be understood that, in the above method, the loss function is constructed and minimized according to the difference between the corresponding value of the predicted value of the lane line model under the initial mapping relationship and its corresponding value under the mapping relationship at the current time, and the mapping relationship at the current time is obtained in an iterative manner. It should be understood that the method for obtaining the homography matrix at another time (or the mapping relationship at another time) can be the same as the method for obtaining the homography matrix at the current time, which is equivalent to treating the "other time" as the "current time", or to replacing the "current time" in the above steps with the "other time".
With reference to the first aspect, in some implementations of the first aspect, when the second predicted value of the lane line model at the current time is determined according to the first mapping relationship, the first predicted value, and the first detection information, a third predicted value can first be obtained according to the first mapping relationship and the first predicted value, where the third predicted value is used to represent the corresponding value of the first predicted value under the first mapping relationship. Optionally, the first mapping relationship (for example, the real-time homography matrix) can be used to transfer the first predicted value into the image coordinate system at the current time to obtain the third predicted value corresponding to the first predicted value; the third predicted value is thus the corresponding value of the first predicted value in the image coordinate system at the current time, determined according to the first predicted value and the first mapping relationship (for example, using the real-time homography matrix).
Optionally, after the third predicted value is obtained, the third predicted value can be adjusted according to the first detection information, so as to obtain the second predicted value.
Optionally, the Mahalanobis distance can be calculated between the lane line corresponding to the predicted value of the lane line model transferred from the first predicted value into the image coordinate system at the current time (the third predicted value) and the lane line pixel points corresponding to the original lane line detection information in the image coordinate system (the first detection information), and the predicted value of the lane line corresponding to the smallest Mahalanobis distance is taken as the second predicted value of the lane line model, that is, the corrected predicted value.
It should be noted that this is equivalent to matching at least one lane line in the vehicle body coordinate system with the lane lines in the image coordinates, and taking the information of the line with the smallest Mahalanobis distance as the measurement for this round. For example, suppose that two lane lines, the left lane line and the right lane line, can be acquired in the vehicle body coordinate system, while three lane lines exist in the image coordinate system; then, according to the above method, two of the three lane lines can be matched to the above left lane line and right lane line respectively.
Optionally, the lane line model (lane line equation) in the vehicle body coordinate system can also be updated according to the second predicted value. Updating the lane line model may include updating parameters of the lane line model (lane line curve equation) such as the slope and the intercept.
With reference to the first aspect, in some implementations of the first aspect, using the second predicted value for path planning includes: acquiring path planning information, and generating a path planning scheme for the next time period or the next time according to the second predicted value and the path planning information. The path planning information may include at least one of the following: road information, traffic information, or self-vehicle information; the road information may include at least one of the following: roadblock information, road width information, or road length information; the traffic information may include at least one of the following: traffic light information, traffic rule information, driving information of other surrounding vehicles, or road condition information; the self-vehicle information may include at least one of the following: self-vehicle motion information, position information, form information, or structure information. The self-vehicle motion information may include the angular velocity, speed, etc. of the vehicle; the position information can be understood as the current position of the vehicle; the form information can be understood as the shape, styling, dimensions, etc. of the vehicle; the structure information can be understood as the components of the vehicle, which can be divided into, for example, the front of the vehicle, the vehicle body, and so on.
Optionally, the drivable area information at the current time can also be acquired, so that a lane-level path planning scheme for the next time period or the next time is determined according to the second predicted value, the path planning information, and the drivable area information.
In this path planning method, the lane line tracking method provided in the first aspect makes it possible to obtain a more accurate predicted value of the lane line model, so that the accuracy of the planned path can be improved.
With reference to the first aspect, in some implementations of the first aspect, using the second predicted value for early warning strategy planning includes: acquiring early warning information, and generating an early warning signal according to the second predicted value, the road information, and a preset early warning threshold; and generating early warning strategy planning information according to the early warning signal, where the early warning strategy planning information is used to represent a response strategy to the early warning signal. The early warning information may include at least one of the following: position information of the vehicle, traffic information, or roadblock information.
In this early warning strategy planning method, the lane line tracking method provided in the first aspect makes it possible to obtain a more accurate predicted value of the lane line model, so that the accuracy of the early warning can be improved.
In a second aspect, a lane line tracking device is provided, and the device includes units for executing the method in any one of the implementations of the first aspect.
In a third aspect, a chip is provided. The chip includes a processor and a data interface, and the processor reads, through the data interface, instructions stored in a memory to execute the method in any one of the implementations of the first aspect.
Optionally, as an implementation, the chip may further include a memory in which instructions are stored, and the processor is configured to execute the instructions stored in the memory; when the instructions are executed, the processor is configured to execute the method in any one of the implementations of the first aspect.
In a fourth aspect, a computer-readable medium is provided. The computer-readable medium stores program code for execution by a device, and the program code includes instructions for executing the method in any one of the implementations of the first aspect.
In a fifth aspect, a computer program product containing instructions is provided; when the computer program product runs on a computer, the computer is caused to execute the method in any one of the implementations of the first aspect.
Brief description of the drawings
FIG. 1 is a schematic diagram of a method for establishing a vehicle body coordinate system.
FIG. 2 is a functional block diagram of a vehicle to which an embodiment of the present application is applicable.
FIG. 3 is a schematic diagram of an automatic driving system according to an embodiment of the present application.
FIG. 4 is a schematic diagram of the application of a cloud-side command autonomous driving vehicle according to an embodiment of the present application.
FIG. 5 is a schematic diagram of a lane line detection and tracking device according to an embodiment of the present application.
FIG. 6 is a schematic flowchart of a lane line detection method according to an embodiment of the present application.
FIG. 7 is a schematic flowchart of a lane line tracking method according to an embodiment of the present application.
FIG. 8 is a schematic flowchart of predicting lane lines using IMM according to an embodiment of the present application.
FIG. 9 is a schematic diagram of the calculation process of the homography matrix according to an embodiment of the present application.
FIG. 10 is a schematic flowchart of a method for updating the lane line model in the vehicle body coordinate system according to an embodiment of the present application.
Detailed description of embodiments
For ease of understanding, some technical terms involved in the embodiments of the present application are first introduced.
1. Lane line detection (lane detection, LD)
Lane line detection refers to a method of learning the relative position of a lane line in an image, which can also be understood as learning the coordinates (pixel points) of the lane line in the image. For example, within an image, after processing by a detection algorithm, the position of the lane line in the image can be obtained, which can also be called obtaining the pixel points of the lane line. Lane lines can be detected from the image plane through a neural network or a traditional algorithm. The embodiments of the present application focus on using a neural network to acquire the pixel points of lane lines in the image plane as an example.
2. Lane line tracking (lane tracking, LT)
Lane line tracking is a method of processing the obtained lane lines, which can reduce the impact of missed detections, false detections, and the like.
Lane line detection is equivalent to detecting the lane lines in each frame of image; as the vehicle keeps moving forward, corresponding detection results are continuously generated. In practice, however, one or more frames of images may fail to be detected, or detection errors may occur (for example, other lines are detected as lane lines), which may cause the lane lines detected earlier and those detected later to fail to correspond, or to correspond incorrectly; lane line tracking can be used to correct these errors. The principle of lane line tracking is to abstract the lane line into a mathematical model, also called a geometric model (for example, the lane line model described below); the predicted value of the lane line model (for example, the first predicted value below) is obtained using real historical data (for example, the self-vehicle motion information at the previous time below), and is then matched, through certain matching rules (for example, the mapping relationship between the image coordinate system and the vehicle body coordinate system below), with the lane lines detected in the image at the current time; lane lines that satisfy the matching rules are considered to be the same lane line. In short, lane line tracking can be regarded as the process of continuously obtaining the lane line model, and the obtained lane line model can be used to correct errors occurring in lane line detection.
In lane line tracking methods, the following two approaches are currently often used to output the lane line model.
(1) Convert the captured original image into the form of a bird-eye view; within the bird-eye view, perform cubic or quadratic curve fitting on the pixel points, and finally convert the fitted lane lines back into the original image for output.
(2) Output the lane lines in the vehicle body coordinate system. FIG. 1 is a schematic diagram of establishing a vehicle body coordinate system. As shown in FIG. 1, a planar rectangular coordinate system is established on the vehicle body, where the X axis points directly ahead of the vehicle and the Y axis points from the front passenger seat toward the driver seat. The directions of the X axis and the Y axis can also be opposite to those shown in the figure; for example, the Y axis may point from the driver seat toward the front passenger seat. The two curves in the vehicle body coordinate system are lane lines, and each lane line can be described by a cubic equation. The curve equation can be:
y = a·x^3 + b·x^2 + c·x + d
where a represents the curvature change of the lane line model, b represents the curvature of the lane line model, c represents the slope of the lane line model, and d represents the intercept of the lane line model. It should be noted that when the above curve equation is used to describe the lane line, it can describe both straight lines and curves; when the value of the curvature b is 0, the lane line is a straight line, so a straight line can be regarded as a special case of the above curve.
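As an illustration of this cubic lane line model, the following minimal Python sketch evaluates the equation at given longitudinal distances; the numeric values in the usage line are arbitrary examples, not parameters from the embodiment.

```python
import numpy as np

def lane_y(x, a, b, c, d):
    """Evaluate the cubic lane line model y = a*x^3 + b*x^2 + c*x + d
    at longitudinal distance(s) x in the vehicle body coordinate system."""
    x = np.asarray(x, dtype=float)
    return a * x**3 + b * x**2 + c * x + d

# A straight lane is the special case with zero curvature terms:
y_straight = lane_y([0.0, 10.0, 20.0], a=0.0, b=0.0, c=0.05, d=-1.8)
```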
The curve equation of the lane line can also be called the lane line equation, and can be regarded as an example of the lane line model described in the embodiments of the present application. That is to say, in this lane line tracking method, lane line tracking is achieved by obtaining the lane line equation in the vehicle body coordinate system. Both of the above ways of outputting lane line tracking results can be regarded as processes in which the detected lane lines and the actual lane lines are made to correspond continuously in real time. It can also be understood that lane line tracking is the process of continuously obtaining the lane line model.
In the solutions of the embodiments of the present application, the lane line tracking method and device output the lane line model in the vehicle body coordinate system.
3. Image coordinate system and vehicle body coordinate system
The image coordinate system is used to represent the relative position of the lane line in the image; that is, the coordinates of the lane line in the image coordinate system are used to represent the relative position of the lane line in the image. Therefore, the image coordinate system can be understood as a coordinate system established on the plane of the image to represent the relative position of the lane line.
The vehicle body coordinate system is used to represent the position of the lane line relative to the vehicle; that is, the coordinates of the lane line in the vehicle body coordinate system are used to represent the position of the lane line relative to the vehicle. Therefore, the vehicle body coordinate system can be understood as a coordinate system established on the vehicle body itself to represent the relative position of the lane line; for example, the vehicle body coordinate system can be established using the method described above.
4. Homography matrix (homography matrix, HM)
The homography matrix is a mathematical concept in projective geometry, used to describe the perspective transformation between a plane in the real world and its corresponding image, and to transform an image from one view to another through the perspective transformation. It can therefore also be understood that the homography matrix can be used to realize transformations between different planes.
In the field of computer vision, the homography of a plane is defined as the projective mapping from one plane to another. Therefore, the mapping of points on a two-dimensional plane onto a camera imager is an example of planar homography. If homogeneous coordinates are used to map a point on a calibration board to a point on the imager, this mapping can be represented by a homography matrix.
The homography matrix has 8 degrees of freedom; that is to say, to obtain a unique solution, 4 point pairs (corresponding to 8 equations) are needed to solve for the homography matrix.
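The 8-degrees-of-freedom solve from 4 (or more) point pairs can be sketched with the standard direct linear transform; the following Python sketch is a textbook formulation, not the embodiment's own solver, and assumes the normalization H[2, 2] = 1 is valid.

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve for the 3x3 homography from >= 4 point pairs (8 DOF) via the
    direct linear transform; src and dst are Nx2 arrays, src mapped to dst."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, dtype=float)
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so that H[2, 2] = 1
```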
In the embodiments of the present application, the homography matrix is used to represent the mapping matrix between the vehicle body coordinate system and the image coordinate system; it can also be understood as the mapping matrix between the plane determined by the vehicle body coordinate system and the plane determined by the image coordinate system.
Due to road conditions, the gradient of the road surface changes constantly. Reflected in lane line detection and tracking, this is equivalent to the planes determined by the vehicle body coordinate system and the image coordinate system changing constantly, with differences between their respective changes. Therefore, in the embodiments of the present application, the homography matrix is used to represent the mapping relationship between the vehicle body coordinate system and the image coordinate system, and by acquiring (updating) the homography matrix in real time, the changes in the mapping relationship between the vehicle body coordinate system and the image coordinate system are acquired (updated, captured) in real time.
5. Interacting multiple model (interacting multiple model, IMM) filtering
In interacting multiple model filtering, the target is assumed to have multiple motion states, each corresponding to one model; the motion state of the target at any time can be represented by one of the given models, and the filtering result for the target is a combination of the results of the multiple filtering models, for example, a weighted combination of the results of the multiple filtering models. Since the maneuvering of a target can be assumed to follow different motion models at different stages, the IMM algorithm can be used to solve problems in target tracking. In the embodiments of the present application, the IMM algorithm is used for the tracking and prediction of lane lines; that is to say, in the embodiments of the present application, the lane line corresponds to the above "target".
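The weighted-combination step of IMM can be sketched as follows in Python; this shows only the standard output combination of per-model estimates, with the mixing and per-model filtering steps of a full IMM cycle omitted, and the function name is illustrative.

```python
import numpy as np

def imm_combine(states, covariances, model_probs):
    """Combine per-model filter outputs into one IMM estimate.

    states: list of state vectors, one per motion model;
    covariances: list of covariance matrices, one per model;
    model_probs: model probabilities (weights) summing to 1.
    """
    mu = np.asarray(model_probs, dtype=float)
    x = sum(m * s for m, s in zip(mu, states))
    # Moment-matched covariance: per-model spread plus estimate disagreement.
    P = sum(m * (C + np.outer(s - x, s - x))
            for m, s, C in zip(mu, states, covariances))
    return x, P
```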
6. Extended Kalman filter (extended Kalman filter, EKF)
The Kalman filter (Kalman filter, KF) is an algorithm that uses the state equation of a linear system and the observed input and output data of the system to optimally estimate the state of the system; since the observed data include the effects of noise and interference in the system, the optimal estimation can also be regarded as a filtering process. The EKF is a filtering method, developed on the basis of the KF, that can be applied to nonlinear systems; the EKF mainly linearizes the nonlinear system and then performs Kalman filtering. In the embodiments of the present application, the EKF is used to filter the lane lines in the vehicle body coordinate system to obtain the lane line model.
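The linearize-then-filter idea of the EKF can be sketched with a generic measurement-update step in Python; the measurement function h and its Jacobian H_jac are assumptions of this sketch (the source does not specify them), and the equations are the standard EKF update.

```python
import numpy as np

def ekf_update(x_pred, P_pred, z, h, H_jac, R):
    """One EKF measurement update, a generic sketch of the filtering step.

    x_pred, P_pred: predicted state and covariance;
    z: measurement (e.g. fitted segment slope/intercept);
    h: nonlinear measurement function; H_jac: its Jacobian at x_pred;
    R: measurement-noise covariance.
    """
    y = z - h(x_pred)                        # innovation
    S = H_jac @ P_pred @ H_jac.T + R         # innovation covariance
    K = P_pred @ H_jac.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x_pred)) - K @ H_jac) @ P_pred
    return x_new, P_new
```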
The lane line tracking method and/or device provided in the embodiments of the present application can be applied to vehicles and other means of transportation; in addition, these methods and/or devices can be applied to manual driving, to assisted driving, and to autonomous driving. Several possible application scenarios are introduced below.
Application scenario 1: path planning/driving planning
The lane line tracking method provided in the embodiments of the present application can be applied to scenarios such as path planning/driving planning. For example, the lane line tracking results obtained by the lane line tracking method can be sent to a path planning module or device, so that the path planning module or device can plan a path according to the lane line tracking results (for example, the second predicted value of the lane line described below, or the lane line model in the vehicle body coordinate system described below), or adjust an existing path planning scheme. For another example, the lane line tracking results obtained by the lane line tracking method can be sent to a driving planning module or device, so that the driving planning module or device can determine the drivable area (freespace) of the vehicle according to the lane line tracking results, or can guide the subsequent behavior of the vehicle according to the lane line tracking results, for example by generating execution actions in autonomous driving, or by guiding the driver in driving.
The drivable area, which can also be called the passable space, is a way of describing the environment around the vehicle. For example, the passable space of a vehicle generally contains information such as other vehicles, pedestrians, and road edges; therefore, the passable space of the vehicle is mainly used to clearly describe the space near the vehicle in which it can travel freely.
Application scenario 2: navigation system
The lane line tracking method and/or device provided in the embodiments of the present application can also be applied to a navigation system. After the vehicle obtains the lane line tracking results (for example, the second predicted value of the lane line described below, or the lane line model in the vehicle body coordinate system described below), it can report the results to a navigation control module or device, and the navigation control module or device can instruct the driver's subsequent driving and/or instruct the autonomous vehicle to generate corresponding execution actions according to the received lane line tracking results in combination with other information such as road conditions. The navigation control module or device can be a data processing device such as a cloud device, a server, or a terminal device, and can also be an on-board module or device provided on the vehicle.
As a further example, the lane line tracking results can be combined with traffic information (for example, traffic light information) to guide the vehicle to drive according to traffic rules. Especially when applied to autonomous driving, the autonomous vehicle can be controlled to drive into the correct lane according to the lane line tracking results and to pass through traffic light intersections at the correct time.
For another example, the lane line tracking results can be combined with road condition information, route information, and the like to plan lane-level navigation information for the vehicle.
Application scenario 3: early warning strategy planning
The lane line tracking method and/or device provided in the embodiments of the present application can also be used for early warning strategy planning. For example, lane departure warning can be performed according to the lane line tracking results and the current position of the vehicle; when it is determined, according to the lane line tracking results and the current position of the vehicle, that the vehicle deviates from the lane, a warning signal is issued. As a further example, when the vehicle approaches a solid lane line or crosses onto a solid lane line, a warning is given in a form such as triggering an indicator light or emitting a sound; the warning signal can also be sent to other decision or control modules to further control the driving of the vehicle.
For another example, collision warning can be performed according to the lane line tracking results and the conditions around the vehicle; strategy planning for collision warning can be performed according to the lane line tracking results and information on other vehicles, pedestrians, obstacles, road edges, buildings, median strips, etc. around the vehicle.
The technical solutions of the embodiments of the present application are introduced below with reference to the accompanying drawings.
FIG. 2 is a functional block diagram of a vehicle to which an embodiment of the present application is applicable. The vehicle 100 may be a manually driven vehicle, or the vehicle 100 may be configured in a fully or partially autonomous driving mode.
In one example, the vehicle 100 can control itself while in the autonomous driving mode; the current states of the vehicle and its surrounding environment can be determined through human operation, the possible behavior of at least one other vehicle in the surrounding environment is determined, a confidence level corresponding to the likelihood that the other vehicle will perform the possible behavior is determined, and the vehicle 100 is controlled based on the determined information. When the vehicle 100 is in the autonomous driving mode, the vehicle 100 can be set to operate without human interaction.
The vehicle 100 may include various subsystems, for example, a traveling system 110, a sensing system 120, a control system 130, one or more peripheral devices 140, as well as a power supply 160, a computer system 150, and a user interface 170.
Optionally, the vehicle 100 may include more or fewer subsystems, and each subsystem may include multiple elements. In addition, each subsystem and element of the vehicle 100 may be interconnected by wire or wirelessly.
Exemplarily, the traveling system 110 may include components that provide powered motion for the vehicle 100. In one embodiment, the traveling system 110 may include an engine 111, a transmission 112, an energy source 113, and wheels 114/tires. The engine 111 may be an internal combustion engine, an electric motor, an air compression engine, or a combination of other types of engines, for example, a hybrid engine composed of a gasoline engine and an electric motor, or a hybrid engine composed of an internal combustion engine and an air compression engine. The engine 111 can convert the energy source 113 into mechanical energy.
Exemplarily, the energy source 113 may include gasoline, diesel, other petroleum-based fuels, propane, other compressed-gas-based fuels, ethanol, solar panels, batteries, and other sources of electric power. The energy source 113 may also provide energy for other systems of the vehicle 100.
Exemplarily, the transmission 112 may include a gearbox, a differential, and a drive shaft; the transmission 112 can transmit mechanical power from the engine 111 to the wheels 114.
In one embodiment, the transmission 112 may also include other components, such as a clutch. The drive shaft may include one or more axles that can be coupled to one or more wheels 114.
Exemplarily, the sensing system 120 may include several sensors that sense information about the environment around the vehicle 100.
For example, the sensing system 120 may include a positioning system 121 (for example, a global positioning system (GPS), a BeiDou system, or another positioning system), an inertial measurement unit (IMU) 122, a radar 123, a laser rangefinder 124, and a camera 125. The sensing system 120 may also include sensors that monitor internal systems of the vehicle 100 (for example, an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors can be used to detect objects and their corresponding characteristics (position, shape, direction, speed, etc.). Such detection and recognition are key functions for the safe operation of the autonomous vehicle 100.
其中,定位系统121可以用于估计车辆100的地理位置。IMU122可以用于基于惯性加速度来感测车辆100的位置和朝向变化。在一个实施例中,IMU 122可以是加速度计和陀螺仪的组合。
示例性地,雷达123可以利用无线电信号来感测车辆100的周边环境内的物体。在一些实施例中,除了感测物体以外,雷达123还可用于感测物体的速度和/或前进方向。
示例性地,激光测距仪124可以利用激光来感测车辆100所位于的环境中的物体。在一些实施例中,激光测距仪124可以包括一个或多个激光源、激光扫描器以及一个或多个检测器,以及其他系统组件。
示例性地,相机125可以用于捕捉车辆100的周边环境的多个图像。例如,相机125可以是静态相机或视频相机。
如图2所示,控制系统130用于控制车辆100及其组件的操作。控制系统130可以包括各种元件,比如可以包括转向系统131、油门132、制动单元133、计算机视觉系统134、路线控制系统135以及障碍规避系统136。
示例性地,转向系统131可以操作来调整车辆100的前进方向。例如,在一个实施例中可以为方向盘系统。油门132可以用于控制引擎111的操作速度并进而控制车辆100的速度。
示例性地,制动单元133可以用于控制车辆100减速;制动单元133可以使用摩擦力来减慢车轮114。在其他实施例中,制动单元133可以将车轮114的动能转换为电流。制动单元133也可以采取其他形式来减慢车轮114转速从而控制车辆100的速度。
如图2所示,计算机视觉系统134可以操作来处理和分析由相机125捕捉的图像以便识别车辆100周边环境中的物体和/或特征。上述物体和/或特征可以包括交通信号、道路边界和障碍物。计算机视觉系统134可以使用物体识别算法、运动中恢复结构(structure from motion,SFM)算法、视频跟踪和其他计算机视觉技术。在一些实施例中,计算机视觉系统134可以用于为环境绘制地图、跟踪物体、估计物体的速度等等。
示例性地,路线控制系统135可以用于确定车辆100的行驶路线。在一些实施例中,路线控制系统135可结合来自传感器、GPS和一个或多个预定地图的数据以为车辆100确定行驶路线。
如图2所示,障碍规避系统136可以用于识别、评估和避免或者以其他方式越过车辆100的环境中的潜在障碍物。
在一个实例中,控制系统130可以增加或替换地包括除了所示出和描述的那些以外的组件,或者也可以减少一部分上述示出的组件。例如可以包括路径规划模块,用于规划车辆行驶路径,进一步,该路径规划既可以是道路级的路径规划也可以是车道级的路径规划;又例如可以包括行驶规划模块,用于确定可行驶区域或者用于指导车辆根据交通规则行驶;又例如可以包括导航控制模块,用于指示驾驶员接下来的驾驶,和/或可以指示自动驾驶车辆生成相应的执行动作等;又例如可以包括预警策略规划模块,用于生成预警策略规划的报警信号,从而避免违反交通规则等安全隐患。
如图2所示,车辆100可以通过外围设备140与外部传感器、其他车辆、其他计算机系统或用户之间进行交互;其中,外围设备140可包括无线通信系统141、车载电脑142、麦克风143和/或扬声器144。
在一些实施例中,外围设备140可以提供车辆100与用户接口170交互的手段。例如,车载电脑142可以向车辆100的用户提供信息。用户接口170还可操作车载电脑142来接收用户的输入;车载电脑142可以通过触摸屏进行操作。在其他情况中,外围设备140可以提供用于车辆100与位于车内的其它设备通信的手段。例如,麦克风143可以从车辆100的用户接收音频(例如,语音命令或其他音频输入)。类似地,扬声器144可以向车辆100的用户输出音频。
如图2所示,无线通信系统141可以直接地或者经由通信网络来与一个或多个设备无线通信。例如,无线通信系统141可以使用3G蜂窝通信,例如,码分多址(code division multiple access,CDMA)、EVDO、全球移动通信系统(global system for mobile communications,GSM)/通用分组无线服务(general packet radio service,GPRS);或者4G蜂窝通信,例如长期演进(long term evolution,LTE);或者,5G蜂窝通信。无线通信系统141可以利用无线上网(WiFi)与无线局域网(wireless local area network,WLAN)通信。
在一些实施例中,无线通信系统141可以利用红外链路、蓝牙或者紫蜂协议(ZigBee)与设备直接通信,也可以使用其他无线协议,例如各种车辆通信系统。例如,无线通信系统141可以包括一个或多个专用短程通信(dedicated short range communications,DSRC)设备,这些设备可用于车辆和/或路边台站之间的公共和/或私有数据通信。
如图2所示,电源160可以向车辆100的各种组件提供电力。在一个实施例中,电源160可以为可再充电锂离子电池或铅酸电池。这种电池的一个或多个电池组可被配置为电源为车辆100的各种组件提供电力。在一些实施例中,电源160和能量源113可一起实现,例如一些全电动车中那样。
示例性地,车辆100的部分或所有功能可以受计算机系统150控制,其中,计算机系统150可以包括至少一个处理器151,处理器151执行存储在例如存储器152中的非暂态计算机可读介质中的指令153。计算机系统150还可以是采用分布式方式控制车辆100的个体组件或子系统的多个计算设备。
例如,处理器151可以是任何常规的处理器,诸如商业可获得的中央处理器(central processing unit,CPU)。
可选地,该处理器可以是诸如专用集成电路(application specific integrated circuit,ASIC)或其它基于硬件的处理器的专用设备。尽管图2功能性地图示了处理器、存储器、和在相同块中的计算机的其它元件,但是本领域的普通技术人员应该理解该处理器、计算机、或存储器实际上可以包括可以或者可以不存储在相同的物理外壳内的多个处理器、计算机或存储器。例如,存储器可以是硬盘驱动器或位于不同于计算机的外壳内的其它存储介质。因此,对处理器或计算机的引用将被理解为包括对可以或者可以不并行操作的处理器或计算机或存储器的集合的引用。不同于使用单一的处理器来执行此处所描述的步骤,诸如转向组件和减速组件的一些组件每个都可以具有其自己的处理器,所述处理器只执行与特定于组件的功能相关的计算。
在此处所描述的各个方面中,处理器可以位于远离该车辆并且与该车辆进行无线通信。在其它方面中,此处所描述的过程中的一些在布置于车辆内的处理器上执行而其它则由远程处理器执行,包括采取执行单一操纵的必要步骤。
在一些实施例中,存储器152可包含指令153(例如,程序逻辑),指令153可以被处理器151执行来执行车辆100的各种功能,包括以上描述的那些功能。存储器152也可包含额外的指令,比如包括向行进系统110、传感系统120、控制系统130和外围设备140中的一个或多个发送数据、从其接收数据、与其交互和/或对其进行控制的指令。
示例性地,除了指令153以外,存储器152还可存储数据,例如,道路地图、路线信息,车辆的位置、方向、速度以及其它这样的车辆数据,以及其他信息。这种信息可在车辆100在自主、半自主和/或手动模式中操作期间被车辆100和计算机系统150使用。
如图2所示,用户接口170可以用于向车辆100的用户提供信息或从其接收信息。可选地,用户接口170可以包括在外围设备140的集合内的一个或多个输入/输出设备,例如,无线通信系统141、车载电脑142、麦克风143和扬声器144。
在本申请的实施例中,计算机系统150可以基于从各种子系统(例如,行进系统110、传感系统120和控制系统130)以及从用户接口170接收的输入来控制车辆100的功能。例如,计算机系统150可以利用来自控制系统130的输入以便控制制动单元133来避免由传感系统120和障碍规避系统136检测到的障碍物。在一些实施例中,计算机系统150可操作来对车辆100及其子系统的许多方面提供控制。
可选地,上述这些组件中的一个或多个可与车辆100分开安装或关联。例如,存储器152可以部分或完全地与车辆100分开存在。上述组件可以按有线和/或无线方式来通信地耦合在一起。
可选地,上述组件只是一个示例,实际应用中,上述各个模块中的组件有可能根据实际需要增添或者删除,图2不应理解为对本申请实施例的限制。
可选地,车辆100可以是在道路行进的自动驾驶汽车,可以识别其周围环境内的物体以确定对当前速度的调整。物体可以是其它车辆、交通控制设备、或者其它类型的物体。在一些示例中,可以独立地考虑每个识别的物体,并且物体各自的特性,诸如它的当前速度、加速度、与车辆的间距等,可以用来确定自动驾驶汽车所要调整的速度。
可选地,车辆100或者与车辆100相关联的计算设备(如图2的计算机系统150、计算机视觉系统134、存储器152)可以基于所识别的物体的特性和周围环境的状态(例如,交通、雨、道路上的冰等等)来预测所述识别的物体的行为。
可选地,每一个所识别的物体都依赖于彼此的行为,因此,还可以将所识别的所有物体全部一起考虑来预测单个识别的物体的行为。车辆100能够基于预测的所述识别的物体的行为来调整它的速度。换句话说,自动驾驶汽车能够基于所预测的物体的行为来确定车辆将需要调整到稳定状态所需的操作(例如,加速、减速或者停止)。在这个过程中,也可以考虑其它因素来确定车辆100的速度,诸如,车辆100在行驶的道路中的横向位置、道路的曲率、静态和动态物体的接近度等等。
除了提供调整自动驾驶汽车的速度的指令之外,计算设备还可以提供修改车辆100的转向角的指令,以使得自动驾驶汽车遵循给定的轨迹和/或维持与自动驾驶汽车附近的物体(例如,道路上的相邻车道中的轿车)的安全横向和纵向距离。
上述车辆100可以为轿车、卡车、摩托车、公共汽车、船、飞机、直升飞机、割草机、娱乐车、游乐场车辆、施工设备、电车、高尔夫球车、火车、和手推车等,本申请实施例不做特别的限定。
在上述各应用场景中,可以利用传感系统120获取车辆所在道路的图像,该图像中包括该道路的成像,具体地,可以利用相机125获取车辆所在道路的图像。传感系统120将获取的图像发送给控制系统130,使得控制系统130可以对图像进行车道线检测和跟踪等处理,具体地,可以利用计算机视觉系统134执行上述过程,以获得车道线跟踪结果。
接下来例如可以将车道线跟踪结果传送给路线控制系统135,由路线控制系统135生成相应的指令,并通过行进系统110或控制系统130中的转向系统131、油门132、制动单元133等来执行相应指令,从而控制车辆接下来的行为。又例如,还可以将跟踪结果传送给控制系统130的其他模块,例如路径规划模块、行驶规划模块、导航控制模块或预警策略规划模块等任意一种或多种模块(在图中均未示出),使得上述各模块能够根据车道线跟踪结果以及结合其他的信息来实现相应的功能,例如生成规划路径、确定可行驶区域、生成导航指令或者生成预警信号等。
在一种可能的实现方式中,上述图2所示的车辆100可以是自动驾驶车辆,下面对自动驾驶系统进行详细描述。
图3是本申请实施例的自动驾驶系统的示意图。
如图3所示的自动驾驶系统包括计算机系统201,其中,计算机系统201包括处理器203,处理器203和系统总线205耦合。处理器203可以是一个或者多个处理器,其中,每个处理器都可以包括一个或多个处理器核。计算机系统201还包括显示适配器(video adapter)207,显示适配器207可以驱动显示器209,显示器209和系统总线205耦合。系统总线205可以通过总线桥211和输入输出(I/O)总线213耦合,I/O接口215和I/O总线耦合。I/O接口215和多种I/O设备进行通信,比如输入设备217(如键盘、鼠标、触摸屏等)、媒体盘(media tray)221(例如CD-ROM、多媒体接口等)。收发器223可以发送和/或接收无线电通信信号,摄像头255可以捕捉静态和动态数字视频图像。其中,和I/O接口215相连接的接口可以是USB端口225。
其中,处理器203可以是任何传统处理器,比如,精简指令集计算(reduced instruction set computer,RISC)处理器、复杂指令集计算(complex instruction set computer,CISC)处理器或上述的组合。
可选地,处理器203可以是诸如专用集成电路(ASIC)的专用装置;处理器203可以是神经网络处理器或者是神经网络处理器和上述传统处理器的组合。
可选地,在本申请所述的各种实施例中,计算机系统201可位于远离自动驾驶车辆的地方,并且可与自动驾驶车辆无线通信。在其它方面,本申请所述的一些过程在设置在自动驾驶车辆内的处理器上执行,其它由远程处理器执行,包括采取执行单个操纵所需的动作。
计算机系统201可以通过网络接口229和软件部署服务器249通信。网络接口229可以是硬件网络接口,比如,网卡。网络227可以是外部网络,比如,因特网,也可以是内部网络,比如以太网或者虚拟私人网络(virtual private network,VPN)。可选地,网络227还可以是无线网络,比如WiFi网络,蜂窝网络等。
如图3所示,硬盘驱动器接口231和系统总线205耦合,硬盘驱动器接口231可以与硬盘驱动器233相连接,系统内存235和系统总线205耦合。运行在系统内存235的数据可以包括操作系统237和应用程序243。其中,操作系统237可以包括解析器(shell)239和内核(kernel)241。shell 239是介于使用者和操作系统内核(kernel)之间的一个接口。shell可以是操作系统最外面的一层;shell可以管理使用者与操作系统之间的交互,比如,等待使用者的输入,向操作系统解释使用者的输入,并且处理各种各样的操作系统的输出结果。内核241可以由操作系统中用于管理存储器、文件、外设和系统资源的那些部分组成。内核241直接与硬件交互,操作系统内核通常运行进程,并提供进程间的通信,提供CPU时间片管理、中断、内存管理、I/O管理等等。应用程序243包括控制汽车自动驾驶相关的程序,比如,管理自动驾驶的汽车和路上障碍物交互的程序,控制自动驾驶汽车路线或者速度的程序,控制自动驾驶汽车和路上其他自动驾驶汽车交互的程序。应用程序243也存在于软件部署服务器249的系统上。在一个实施例中,在需要执行自动驾驶相关程序247时,计算机系统201可以从软件部署服务器249下载应用程序。
例如,应用程序243还可以是自动驾驶汽车和路上车道线交互的程序,也就是说可以实时跟踪车道线的程序。
例如,应用程序243还可以是控制自动驾驶车辆进行自动泊车的程序。
示例性地,传感器253可以与计算机系统201关联,传感器253可以用于探测计算机系统201周围的环境。
举例来说,传感器253可以探测路上的车道,比如可以探测到车道线,并能够在车辆移动(如正在行驶)过程中实时跟踪到车辆前方一定范围内的车道线变化。又例如,传感器253可以探测动物,汽车,障碍物和人行横道等,进一步传感器还可以探测上述动物,汽车,障碍物和人行横道等物体周围的环境,比如:动物周围的环境,例如,动物周围出现的其他动物,天气条件,周围环境的光亮度等。
可选地,如果计算机系统201位于自动驾驶的汽车上,传感器可以是摄像头,红外线感应器,化学检测器,麦克风等。
示例性地,在车道线跟踪的场景中,传感器253可以用于探测车辆前方的车道线,从而使得车辆能够感知在行进过程中车道的变化,以据此对车辆的行驶进行实时规划和调整。
示例性地,在自动泊车的场景中,传感器253可以用于探测车辆周围的库位和周边障碍物的尺寸或者位置,从而使得车辆能够感知库位和周边障碍物的距离,在泊车时进行碰撞检测,防止车辆与障碍物发生碰撞。
在一个示例中,图2所示的计算机系统150还可以从其它计算机系统接收信息或转移信息到其它计算机系统。或者,从车辆100的传感系统120收集的传感器数据可以被转移到另一个计算机对此数据进行处理。
下面以图4为例进行介绍,图4是本申请实施例的一种云侧指令自动驾驶车辆的应用示意图。
如图4所示,来自计算机系统312的数据可以经由网络被传送到云侧的服务器320用于进一步的处理。网络以及中间节点可以包括各种配置和协议,包括因特网、万维网、内联网、虚拟专用网络、广域网、局域网、使用一个或多个公司的专有通信协议的专用网络、以太网、WiFi和HTTP、以及前述的各种组合;这种通信可以由能够传送数据到其它计算机和从其它计算机传送数据的任何设备进行,诸如调制解调器和无线接口。
在一个示例中,服务器320可以包括具有多个计算机的服务器,例如负载均衡服务器群,为了从计算机系统312接收、处理并传送数据的目的,其与网络的不同节点交换信息。该服务器可以被类似于计算机系统312配置,具有处理器330、存储器340、指令350、和数据360。
示例性地,服务器320的数据360可以包括车辆周围道路情况的相关信息。例如,服务器320可以接收、检测、存储、更新、以及传送与车辆道路情况的相关信息。
例如,车辆周围道路情况的相关信息包括车辆周围的其它车辆信息以及障碍物信息。
目前的车道线检测和车道线跟踪往往需要采用平面假设原理,比如,自动驾驶车辆处于平坦的区域,不存在坡道等情况;进而将图像坐标系中的车道线与自动驾驶车辆的自车坐标系中的车道线进行对应,完成车道的空间定位。但是平面假设,即假设车辆所处的路面完全平坦不存在坡道等,这在真实世界中往往难以实现,从而导致车道的定位的精度较低。另外,在车辆行进过程中,路面情况是实时变化的,因此,上述方法无法适应不断变化的路面情况。
此外,目前的车道线检测和车道线跟踪还往往需要假设路面上的车道线是互相平行的,但在路口附近经常会出现车道线数量发生变化的情况,例如在路口处多出一条或多条车道线,导致不满足车道线互相平行的前提,使得无法准确定位车道。
经分析,在现有技术的方案中,由于基于平面假设来进行车道线跟踪,相当于默认车体坐标系和图像坐标系的映射关系是一成不变的,也就是说,默认车体坐标系和图像坐标系之间是一直平行的,所以当将车体坐标系中得到的车道线模型的预测值通过这个一成不变的映射关系与图像坐标系中的检测到的车道线进行匹配时,就会出现较大偏差,但现有技术的方案忽略了这个偏差,导致车道线的跟踪结果误差较大,而在本申请实施例的方案中,能够获取这个偏差并且消除它,从而提高车道线跟踪的准确性。在本申请实施例中,提供了一种车道线跟踪方法以及装置,通过实时获取图像坐标系和车体坐标系之间的映射关系,来消除路面不平的影响,换而言之,这样的实时映射关系能够捕捉到图像坐标系和车体坐标系之间的映射关系发生了变化,以及得到更为准确的映射关系,从而消除路面不平的影响。在本申请实施例中,还利用单应性矩阵来表示图像坐标系和车体坐标系之间的映射关系,以及利用实时的单应性矩阵来表示实时映射关系,并通过综合图像坐标系中获得的车道线检测信息以及车体坐标系中的车道线模型的预测值,来确定车道线跟踪结果,从而消除路面不平对车道线跟踪的影响,提高车道线跟踪的精度。此外,由于本申请实施例的方案是对每条车道线单独跟踪,从而不需要以车道线平行为前提,能够在任何城市道路场景中都适用,同样提高了车道的跟踪精度,且具有普适性,适用于所有城市道路。
图5是本申请实施例的车道线检测和跟踪装置的示意图。该装置500可以用于执行车道线检测过程和/或用于执行车道线跟踪过程。如图5所示该装置500可以包括感知模块510、车道线检测模块520和车道线跟踪模块530。
其中,感知模块510可以用于感知车辆行驶时路面以及周围环境的信息;感知模块可以包括相机,相机可以用于图像采集,也就是可以用于感知车辆周围的环境信息。例如,可以设置利用相机获取车辆前方100米(meter,m)以内的图像。
车道线检测模块520可以用于实现像素坐标系(图像坐标系)下的车道线检测,该模块可以由深度学习网络构成,用于获取车道线的线型和像素点。例如,车道线检测模块520可以接收来自于感知模块510的图像,并对图像进行特征提取等处理,获得车道线的特征信息,进而根据车道线的特征信息进行分类和提取像素点,从而获得车道线的线型和车道线的像素点。
需要说明的是,上述线型可以理解为车道线的类型,例如实线、虚线、双线等在城市道路中常见的车道线线型。但应理解,上述几个例子只是提供的几种线型的示例,不存在限定,除了上述类型以外,还可以是其他的线型,也就是说,只要是道路上已经存在的车道线类型或者未来可能根据政策变化等新产生的车道线类型,都可以作为本申请所述车道线线型的示例。上述车道线的像素点可以理解为车道线在像素坐标系中的坐标,也可以理解为车道线在图像中的相对位置。
车道线跟踪模块530可以用于获取来自于车道线检测模块520的车道线线型和像素点,还可以用于获取来自于其他模块的自车信息。车辆的自车信息可以理解为用于描述车辆的状态、运动等的信息,自车信息可以包括车辆的自车运动信息,自车运动信息可以理解为动力学信息,例如可以包括车辆的车速或角速度等任意一种或多种动力学信息。车道线跟踪模块530可以根据上述车道线线型、车道线像素点、自车信息等中的一种或多种信息对车道线进行预测和/或对车道线模型进行更新。
可选地,车道线跟踪模块530可以用于根据当前时刻的自车信息,利用车道线预测方法(例如下文所述的IMM算法)获取下一时刻的车道线的预测值。在下一时刻时,利用下一时刻的车道线预测值和下一时刻获得的车道线像素点来计算单应性矩阵。当得到单应性矩阵后,利用单应性矩阵将下一时刻的车道线预测值转移到图像中,并根据下一时刻获得的车道线像素点和车道线预测值来确定车道线跟踪结果。该车道线跟踪结果可以是利用车道线预测值确定出来的最终的车道线预测值(例如下文的第二预测值)。
车道线跟踪模块530所输出的车道线跟踪结果还可以传送给其他模块,例如可以传送给可行驶区域检测模块,用于获得更为准确的可行驶区域,又例如可以传送给交通规则控制模块,用于控制车辆不要违反交通规则等等,又例如可以用于上述各应用场景,在此不再一一列举。
需要说明的是,图5所示各模块只是逻辑上的划分,对于划分方式不存在限定,还可以采用其他的划分方式。例如可以把车道线检测模块520作为车道线跟踪模块530的一部分,相当于在车道线跟踪模块530中设置一个获取模块,来获取车道线的线型和像素点;又例如可以把感知模块510作为独立于装置500的模块,也就是说,装置500只需要能够获取待处理图像即可,不需要实时拍摄图像;又例如可以把感知模块510和车道线检测模块520均作为独立于装置500的模块,也就是说,车道线跟踪模块530只需要能够获取车道线的检测信息(例如上述线型和像素点)即可;又例如可以把感知模块510和车道线检测模块520均集成在车道线跟踪模块530中,还可以将二者设置为车道线跟踪模块530的获取模块。
图6是本申请实施例的车道线检测方法的示意性流程图。图6所示的检测方法可以由图2所示车辆,或者图3所示的自动驾驶系统,或者图5所示的装置500来执行,例如可以由图5中的车道线检测模块520来执行,又例如还可以在车道线跟踪模块530中设置获取模块,由获取模块来执行。下面对图6所示方法的各步骤进行介绍。
601、对待处理图像进行特征提取,获取待处理图像中的车道线特征信息。
可选地,该待处理图像可以是利用相机或摄像头等获取的,可以是实时获取,也可以是从存储模块或装置或设备中读取。
需要说明的是,该车道线特征信息是指能够表示车道线的特征的信息。
可选地,可以利用神经网络方法获取上述特征信息,也可以采用其他方法获取上述特征信息。
602、根据车道线特征信息进行分类,获得车道线的线型。
可选地,可以利用基于神经网络的方法对车道线进行分类,来确定车道线的线型。该线型是指道路中可能存在的车道线的类型,例如在城市交通中车道线包括实线、虚线、双线等类型。但需要说明的是,在本申请实施例中,对于车道线是直线还是曲线不做区分,因为车道线均可以用曲线表示,而直线可以看作是曲线的特例。
603、获取车道线的像素点。
可选地,可以利用回归方法或者利用实例分割方法获得图像中车道线像素点。回归方法相当于对图像中的车道线进行逐点提取,也就是说,可以得到能够表示车道线的一个个点。实例分割则可以理解为在像素级识别对象轮廓,相当于把图像中的车道线以条形框的形式分割出来,也就是说,利用实例分割方法可以得到能够表示车道线的一些条形框。
应理解,像素点可以解释为用于表示车道线在图像中的相对位置或坐标。还应理解,除了上述两种常见的获取车道线像素点的方法,还可以采用其他相同或相似功能的方法获取,本申请对此不做限定。
在本申请实施例中,可以利用图6所示车道线检测方法获得车道线在像素坐标系(图像坐标系)中的检测信息,例如车道线的线型类别、像素点等信息,这些检测信息可以作为车道线跟踪方法的输入数据使用。也就是说,车道线的检测信息可以理解为利用车道线检测方法获取的图像坐标系中的车道线的信息,检测信息可以包括线型、像素点中的至少一种。
图7是本申请实施例的车道线跟踪方法的示意性流程图。下面结合图7所示各步骤进行介绍。
701、获取第一预测值,第一预测值用于表示车体坐标系中的车道线模型。
可选地,该第一预测值可以是在车体坐标系中,根据在先时刻的自车信息(例如自车运动信息)得到的对应于当前时刻的车道线的预测值。
车辆的自车信息可以理解为用于描述车辆的状态、运动等的信息,自车信息可以包括车辆的自车运动信息,自车运动信息可以理解为动力学信息,例如可以包括车辆的车速或角速度(例如横摆角速度)等任意一种或多种动力学信息。
可选地,可以利用上文所提供相关装置或模块,以实时获取或者从存储装置中读取的方式获取车辆的自车信息。
还应理解,第一预测值可以是从例如存储装置中读取,也可以是利用自车运动信息等进行运算得到,它可以是在在先时刻产生,也可以是在当前时刻产生,也就是说,可以是在当前时刻之前得到了在先时刻的自车运动信息等信息后,进行预测得到了第一预测值;也可以是在先时刻的时候只存储了自车运动信息,不进行运算,等当前时刻的时候再进行运算,等等,在此不再一一列举。
第一预测值可以利用下面的方法获得。
需要说明的是,车道线模型可以理解为车道线的数学模型,或者可以理解为表示车道线的表达式,例如可以采用上文所述的曲线方程来表示车道线,又例如可以采用多项式来表示车道线,所以也可以称之为车道线方程、车道线曲线方程、车道线多项式等。
可选地,车道线模型可以用车道线的位置(截距)、角度、曲率、曲率的变化率、曲率半径等参数来表示。
可选地,车道线模型还可以用车道宽度、车中心偏离车道中心位置、车道线角度、曲率等参数来表示。应理解,还可以有其他的表示车道线的方式,只要能够表示出车道线的相对位置和变化趋势即可,不存在限定。
可选地,可以利用车道线预测方法或称之为车道线预测算法,根据当前时刻和在先时刻的时间间隔、在先时刻的自车信息,来获得当前时刻车道线在车体坐标系中的预测值,即第一预测值。
可选地,车道线预测方法例如可以采用IMM算法或者称之为IMM预测方法获得上述车道线的预测值。
需要说明的是,车道线预测算法也可以看作是利用车道线预测算法的模型对一些输入数据进行处理,来获得车道线模型的预测值,例如IMM算法的模型就可以称之为IMM预测模型、IMM算法模型等。可以看出,在本申请实施例中,车道线预测算法的模型和车道线模型是不同的概念,车道线预测算法的模型是用于获得车道线模型的预测值的预测算法的模型,而车道线模型则是指车道线在车体坐标系中的曲线方程或者称之为数学表达式。
图8是本申请实施例的利用IMM预测车道线的示意性流程图,下面结合图8,以IMM算法为例,对本申请实施例所提供的利用车道线预测方法获得当前时刻车道线在车体坐标系中的预测值进行介绍。
可选地,可以利用EKF滤波器对车体坐标系下的车道线模型进行滤波处理,以提高车道线模型的准确性。
801、根据时间间隔确定是否预测。
可选地,根据输入图像的时间戳所确定的时间间隔,判断是否需要预测,也就是根据输入图像的前后两帧图像所对应的时刻所确定的时间间隔,判断是否需要改变滤波器的状态值,从而获得车道线预测值。滤波器的状态值也可以理解为滤波器的系数,或者滤波器的状态、参数等。
该时间戳可以理解为上文所述图像帧所对应的时刻。
可选地,可以设置当两帧图像所确定的时间间隔满足下面的式子时,将滤波器设置为可更新或称之为工作模式,也可以称之为启动滤波器。
MinLoopTime≤ΔT≤MaxLoopTime,   (2)
其中,ΔT=T(t)-T(t-1)表示前后两帧图像所确定的时间间隔,单位可以用毫秒(millisecond,ms)来表示;T(t)和T(t-1)可以分别表示输入图像中对应较晚时刻的时间戳和对应较早时刻的时间戳;MaxLoopTime表示最大时间间隔阈值,例如可以设置为200ms;MinLoopTime表示最小时间间隔阈值,例如可以设置为10ms。当ΔT满足上式(2)时,可以将滤波器设置为可更新或称之为工作模式,也可以称之为启动滤波器。
需要说明的是,假设将两帧图像中时间戳较晚的时刻作为当前时刻,则时间戳较早的时刻可以作为在先时刻;还可以将时间戳较晚的时刻作为在后时刻,时间戳较早的时刻作为当前时刻。也就是说,两帧图像可以是连续的,也可以不是连续的。当两帧图像是连续的时候,可以认为两帧图像分别是上一时刻和当前时刻的图像,还可以认为两帧图像分别是当前时刻和下一时刻的图像。
还需要说明的是,在步骤801中,通过判断在先时刻(例如上述T(t-1))和当前时刻(例如上述T(t))的时间间隔是否在预设范围内,从而决定是否进行预测。原因在于,当时间间隔过大的时候,对应于实际相当于车辆可能已经行驶出较远的距离,车道线可能已经有了很大变化,还可能已经不是在一条道路上行驶,此时,这种预测很可能导致误差较大等情况。而当时间间隔过小的时候,对应于实际相当于车辆可能几乎没往前移动,此时,这种预测虽然依旧会准确,但由于变化很小可能参数几乎不变,所以这种太过频繁的预测,反而会在一定程度上造成资源的浪费。因此,只有当时间间隔在预设范围内时才进行预测,能够在获得较为准确的车道线预测值的同时,减少占用资源。
还应理解,对于上述预设范围不存在数值等限定,例如,上述预设范围的上限值(MaxLoopTime)可以根据经验或者试验等方法设定为不同于上述例子中200ms的其他值,上述预设范围的下限值(MinLoopTime)也可以根据经验或者试验等方法设定为不同于上述例子中10ms的其他值,在此不再赘述。
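示例性地,步骤801中式(2)的判断逻辑可以用如下代码片段示意(阈值数值沿用上文示例,函数名should_predict为示例性假设):

def should_predict(t_cur_ms, t_prev_ms, min_loop_ms=10, max_loop_ms=200):
    # 按式(2)判断前后两帧图像的时间间隔是否落在预设范围内,
    # 仅当满足条件时才启动滤波器进行预测
    dt = t_cur_ms - t_prev_ms
    return min_loop_ms <= dt <= max_loop_ms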
802、利用滤波器,根据自车信息获取预测值。
可选地,可以利用在先时刻的自车信息和在先时刻的滤波器,以及在先时刻距当前时刻的时间间隔,来获得当前时刻的车道线预测值。
可选地,可以利用车辆的在先时刻(例如上一时刻)自车信息以及在先时刻(例如上一时刻)的滤波器状态计算当前时刻的滤波器的估计,进而获得当前时刻的车道线预测值。
可选地,可以利用状态空间表示车道线跟踪方法中的跟踪器(滤波器)。该跟踪器的状态空间包括以下至少一种参数:车道线模型(车道线曲线方程)的曲率变化、车道线的曲率、车道线在车体坐标系原点处的切线的斜率、车道线在车体坐标系原点处的偏移量。例如,假设需要跟踪左左车道线、左车道线、右车道线和右右车道线共4条车道线,则可以利用下面的式子构造跟踪器的状态空间X,其中,X为9维向量。
X = [curvature change, curvature, slope, offset_leftneighbor, offset_left, offset_right, offset_rightneighbor, pitch, yaw]^T,   (3)
其中,curvature change表示车道线模型的曲率变化,curvature表示车道线的曲率,slope表示车道线在车体坐标系的原点处的切线的斜率,offset_leftneighbor、offset_left、offset_right、offset_rightneighbor分别表示左左车道线在车体坐标系原点处的偏移量、左车道线在车体坐标系原点处的偏移量、右车道线在车体坐标系原点处的偏移量,以及右右车道线在车体坐标系原点处的偏移量,pitch表示俯仰角,yaw表示偏航角。
可选地,可以利用下面的式子对跟踪器(滤波器)的状态进行更新。
X(t|t-1) = F·X(t-1|t-1) + g,   (4)
状态转移矩阵F的具体形式由式(5)给出,增益g的具体形式由式(6)给出。
其中,假设当前时刻为t,用t-1表示前一时刻或者称之为上一时刻。在上面的式子中,X(t-1|t-1)表示t-1时刻的滤波器的状态所构成的状态向量/矩阵,也可以称之为滤波器的系数所构成的状态向量/矩阵;X(t|t-1)表示根据t-1时刻的滤波器的状态获得的t时刻的状态估计;dt=ΔT,表示t时刻和t-1时刻的时间间隔;l表示t时刻和t-1时刻的距离间隔,可以利用式子l=dt*v_ego获得;v_ego表示车辆的行驶速度,或者称之为车速;phi_ego表示车辆的角速度,例如横摆角速度;F表示状态转移矩阵;g可以理解为状态的增益,相当于t时刻相对于t-1时刻的状态的变化值。
对于跟踪器的协方差阵的更新,可以利用下面的式子进行。
P(t|t-1) = F·P(t-1|t-1)·F^T + Q,   (7)
Q = q_1*q_2,   (8)
其中,P(t|t-1)表示根据t-1时刻的协方差阵的值获得的t时刻的协方差阵的估计;P(t-1|t-1)表示t-1时刻的协方差阵;Q表示系统噪声的协方差阵,q_1和q_2表示上面式子(8)中系统的噪声矩阵/向量。也就是说,利用系统的噪声矩阵/向量构造噪声的协方差阵。
假设用Q_ij表示Q中的第i行第j列元素,用q_i1表示q_1的第i个元素,用q_2j表示q_2的第j个元素,则Q_ij=q_i1*q_2j,其中,i、j均为整数。结合上面的式子(4)-(6)可以设置Q为9行9列的矩阵,q_1为9*1的列向量,q_2为1*9的行向量。
可选地,在初始时,可以设置q_1和q_2对应的元素相等,也就是说,相当于q_1=q_2^T。
综合式子(4)至(8)可以看出,步骤802主要根据在先时刻(例如上述t-1时刻)和当前时刻(例如上述t时刻)的时间间隔(例如上述ΔT)和在先时刻的自车信息(例如上述车速、角速度),更新当前时刻的滤波器状态,从而获得当前时刻的车道线预测值。
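示例性地,步骤802中式(4)、(7)、(8)的递推关系可以用如下代码片段示意(F、g、q_1、q_2的具体取值依式(5)、(6)及实际标定确定,此处作为输入假设给出,并非本申请方案的限定实现):

import numpy as np

def predict_step(X, P, F, g, q1, q2):
    # X: 9维状态向量; P: 9x9协方差阵; F: 状态转移矩阵; g: 状态增益
    X_pred = F @ X + g            # 式(4):状态预测
    Q = np.outer(q1, q2)          # 式(8):Q_ij = q_i1 * q_2j
    P_pred = F @ P @ F.T + Q      # 式(7):协方差阵预测
    return X_pred, P_pred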
需要说明的是,在图8所示方法中,步骤801也可以不执行;当步骤801执行时,能够获得更好的预测结果,原因与步骤801中的说明一致:只有当时间间隔在预设范围内时才进行预测,才能在获得较为准确的车道线预测值的同时,节省占用的资源。
702、获取第一检测信息,该第一检测信息包括当前时刻车道线在图像坐标系中的像素点。
可选地,车道线的检测信息可以包括车道线的像素点信息,还可以包括车道线的线型。例如,第一检测信息可以包括当前时刻车道线在图像坐标系中的像素点,也可以包括当前时刻车道线的线型。
需要说明的是,当前时刻是通过图像帧所对应的时刻来确定的,例如,假设以将某一帧图像所对应的时刻作为当前时刻;则在先时刻是指图像帧中所对应的时刻早于该当前时刻的时刻,在先时刻可以包括上一时刻;在后时刻是指图像帧中所对应的时刻晚于该当前时刻的时刻,在后时刻可以包括下一时刻。
可选地,可以利用上文所提供的相关装置,利用图6所示方法实时获取检测信息,也可以从存储装置中获取上述检测信息。
应理解,步骤701和步骤702可以同时执行也可以不同时执行,且执行的先后顺序也不存在限定,此外获取的方式也可以相同或者不同。
703、根据第一预测值和第一检测信息,确定第一映射关系。
需要说明的是,该第一映射关系用于表示图像坐标系和车体坐标系之间的实时映射关系,对于当前时刻来说,第一映射关系即当前时刻图像坐标系和车体坐标系之间的映射关系。
需要说明的是,步骤703中,将第一预测值和第一检测信息进行综合考虑来确定实时的映射关系,并进一步确定第二预测值,相当于利用当前时刻的车道线预测值和当前时刻的车道线检测信息来确定当前时刻的更为准确的预测值(或者可以理解为获得更为准确的车道线模型)。假设改变当前时刻,把原本的在先时刻作为当前时刻,把原本的当前时刻作为下一时刻,也就是相当于向前推进一个时刻,则上述过程相当于利用当前时刻的自车信息获得下一时刻的车道线预测值,然后利用下一时刻的图像坐标系中的车道线检测信息和下一时刻的车道线预测值来确定下一时刻的更为准确的预测值(或者可以理解为获得更为准确的车道线模型)。
可选地,在步骤703,可以利用单应性矩阵来描述图像坐标系和车体坐标系之间的映射关系,则某一时刻的单应性矩阵可以用于表示该时刻的图像坐标系和车体坐标系之间的映射关系,也就是说,可以利用实时的单应性矩阵来表示实时映射关系。
可选地,可以设置一个或多个(即至少一个)单应性矩阵。
也就是说,可以利用车道线在车体坐标系中的预测值和车道线的像素点计算单应性矩阵。可选地,可以采用下面的方法计算当前时刻的单应性矩阵。
首先,获取初始单应性矩阵,包括:确定车辆在图像中的位置,并在车辆的前方确定多个在车体坐标系中已知的标志物,根据车体和多个标志物在图像中的坐标信息来计算单应性矩阵。初始单应性矩阵也可以理解为单应性矩阵的初始值,用于表示车体坐标和图像坐标系之间的单应性关系的初始值。
举例说明,把车固定在车棚中某个位置,并在车的前方10m-50m,打点标记数量大于4个的明显的标志物(这些标志物的位置在车体坐标系中已知),之后在图像中通过特征提取找到对应的标志物,根据标志物在车体和图像之间的坐标信息,计算当地水平面和图像之间的单应性矩阵。需要说明的是,由于单应性矩阵的自由度为8,每个点有x、y两个坐标约束,因此至少需要打点标记4个点。
需要说明的是,初始单应性矩阵不需要每次都重新获取,也就是说,在车道线跟踪过程中,初始单应性矩阵可以在第一次获取后,一段时间内持续使用。
还应理解,初始单应性矩阵用于表示初始映射关系,也就是图像坐标系和车体坐标系之间的映射关系的初始值。
之后,计算当前时刻的单应性矩阵:可以将车体坐标系下的车道线预测值(例如第一预测值)根据初始单应性矩阵转移到图像平面(图像坐标系所确定的平面)内,和图像平面内的车道线(例如第一检测信息)做匹配,然后最小化匹配的误差,通过迭代的方法最终求出车前方区域的单应性矩阵。还可以分区域、实时计算单应性矩阵。下面结合图9,以计算3个单应性矩阵为例,对实时获取单应性矩阵的方法进行介绍,应理解,当只设置一个单应性矩阵的时候,图9所示方法同样适用。
图9是本申请实施例的单应性矩阵的计算流程示意图。下面对图9所示各步骤进行介绍。
可选地,可以划分区域,并计算不同区域的单应性矩阵,例如可以将车辆的前方按距离划分为多个区域,获取对应于多个区域的多个单应性矩阵,其中,多个区域中的每个区域均分别对应至少一个单应性矩阵。
进一步举例说明,可以根据实践或者基于试验数据等把车辆前方划分出三个区域,分别为0-20m、20m-50m和50m-100m,则可以将这三个区域分别对应三个单应性矩阵H0、H1和H2,也就是说,0-20m对应矩阵H0,20m-50m对应矩阵H1,50m-100m对应矩阵H2。
901、获得初始单应性矩阵。
需要说明的是,初始单应性矩阵不需要每次都重新获得,可以每隔一段时间(例如一周、几周、一个月、几个月等)利用上面提供的方法标定一次。
可选地,当进行上述区域划分时,可以分别获得H0、H1和H2的初始单应性矩阵。但应理解,当进行其他方式的划分时,依然可以获得每个区域的初始单应性矩阵。
902、获得当前时刻车体坐标系下的车道线的第一预测值。
可选地,可以利用上文所提供的相关方法获得第一预测值,例如利用图7所示步骤701中提供的相关方法获得。
903、根据第一预测值和初始单应性矩阵,获得对应于第一预测值的第四预测值。
第四预测值可以理解为用于表示第一预测值在初始映射关系下的对应值,或者可以理解为第一预测值在初始图像坐标系中的对应值。由于初始单应性矩阵可以看作是表示车体坐标系平面和初始图像平面(初始图像坐标系)之间的映射关系,因此,第四预测值相当于在初始图像平面中第一预测值的对应值。步骤903还可以看作是将车体坐标系下的车道线转移到初始图像平面内。
可选地,可以使得初始单应性矩阵满足式子p'=Hp,其中p'用于表示IMM预测得到的车道线通过初始单应性矩阵对应的在初始图像坐标系下的坐标,相当于第四预测值;p用于表示车体坐标系中的车道线模型的预测值,可以理解为第一预测值;H表示初始单应性矩阵。
904、利用第一检测信息和第四预测值的差值,确定当前时刻的单应性矩阵。
可选地,可以根据预测得到的车道线在初始图像坐标系下的坐标和当前时刻图像坐标系中原车道线的坐标,构造损失函数(cost function),例如可以利用二者的像素差(此处可以理解为坐标差,或者二者之间的距离)来构造损失函数。
可选地,可以利用下面的式子f=∑(p_image-p′)构造损失函数,其中,f表示损失函数,p'的含义与步骤903所述一致,p_image用于表示利用上文所提供的车道线检测方法获得的图像坐标系中的车道线检测信息。
通过最小化损失函数,利用迭代的方法获得当前时刻的单应性矩阵,例如获得上述如H0、H1和H2。该当前时刻的单应性矩阵可以用于表示上述第一映射关系,也就是当前时刻图像坐标系和车体坐标系之间的映射关系。
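示例性地,“通过最小化损失函数、迭代求解当前时刻的单应性矩阵”可以用如下代码片段示意(此处假设以初始单应性矩阵为迭代初值、以像素差为残差,并假设使用SciPy的least_squares做迭代优化,同时假设H右下角元素固定为1以对应8个自由度,并非本申请方案的限定实现):

import numpy as np
from scipy.optimize import least_squares

def project(H, pts_body):
    # 用单应性矩阵H将车体坐标系平面上的点投影到图像坐标系
    pts_h = np.hstack([pts_body, np.ones((len(pts_body), 1))])
    proj = (H @ pts_h.T).T
    return proj[:, :2] / proj[:, 2:3]

def refine_homography(H0, pts_body, pts_image):
    # 以初始单应性矩阵H0为初值,最小化p_image与p'的像素差,
    # 迭代得到当前时刻的单应性矩阵
    def residuals(h8):
        H = np.append(h8, 1.0).reshape(3, 3)
        return (project(H, pts_body) - pts_image).ravel()
    res = least_squares(residuals, H0.ravel()[:8])
    return np.append(res.x, 1.0).reshape(3, 3)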
在获得当前时刻的单应性矩阵后,可以利用当前时刻的单应性矩阵将第一预测值从车体坐标系转移至图像坐标系中。当包括多个单应性矩阵,例如通过区域划分出多个单应性矩阵时,可以利用图9所示方法获得例如三个区域的单应性矩阵H0、H1和H2,再分别利用上述三个单应性矩阵分区域地将车道线的预测值转移到图像坐标系中。
需要说明的是,由于车辆行驶过程中,路面的变化会反映为图像所确定的平面的变化,使得车体坐标系和图像坐标系之间的映射关系存在变化,而初始单应性矩阵相当于确定了车体坐标系和图像坐标系之间的初始映射关系,当前时刻的单应性矩阵则相当于确定了车体坐标系和当前时刻的图像坐标系之间的映射关系(实时映射关系)。因此,在图9所示方法中,利用车道线模型的预测值在初始图像坐标系中的对应值和当前时刻图像坐标系中的对应值的差值,来构造和最小化损失函数,以及通过迭代的方式获得当前时刻的车体坐标系和图像坐标系之间的映射关系,也就是实时的单应性矩阵。也可以理解为,在图9所示方法中,根据车道线模型的预测值在初始映射关系下的对应值和在当前时刻的映射关系下的对应值之间的差值来构造和最小化损失函数,以及通过迭代的方式获得对应于当前时刻的实时映射关系。但应理解,其它时刻的单应性矩阵(或其它时刻的映射关系)的获取方法可以与当前时刻的单应性矩阵的获取方法相同,相当于将“其它时刻”看作是“当前时刻”,或者可以看作是将上述步骤中的“当前时刻”替换为“其它时刻”即可。
704、根据第一映射关系,确定第二预测值。
需要说明的是,第二预测值用于表示第一预测值的修正值,也就是说,第二预测值可以理解为是对第一预测值进行修正之后的预测值。
可选地,可以先根据第一映射关系和第一预测值,获得第三预测值,该第三预测值用于表示第一预测值在第一映射关系下的对应值。可选地,可以利用第一映射关系(例如利用当前时刻的单应性矩阵),将第一预测值转移到当前时刻的图像坐标系,获得对应于第一预测值的第三预测值,该第三预测值也就是根据第一预测值和第一映射关系(例如利用当前时刻的单应性矩阵)确定的在当前时刻的图像坐标系中第一预测值的对应值。
可选地,获得第三预测值之后,可以根据第一检测信息对第三预测值进行调整,从而获得第二预测值。
可选地,可以计算第一预测值转移到当前时刻图像坐标系中的车道线模型的预测值(第三预测值)所对应的车道线和图像坐标系中的原车道线检测信息(第一检测信息)所对应的车道线像素点之间的马氏距离,并将马氏距离最小的值所对应的车道线的预测值作为修正后的车道线模型的预测值(第二预测值)。
需要说明的是,此处相当于将至少一条车体坐标系中的车道线与图像坐标中的各车道线进行对应,从而将马氏距离最小的线的信息作为此次的测量量。举例说明,假设在车体坐标系中可以获取2条车道线,分别为左车道线和右车道线,而在图像坐标系中存在3条车道线,则能够根据上述方法将3条车道线中的2条车道线分别对应到上述左车道线和右车道线。
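示例性地,“将马氏距离最小的车道线作为匹配结果”可以用如下代码片段示意(协方差阵S作为输入假设给出,点的选取与对应方式为示例性简化,并非本申请方案的限定实现):

import numpy as np

def match_lane(pred_pt, detected_pts, S):
    # pred_pt: 预测车道线上某点在图像坐标系中的坐标,形状为(2,)
    # detected_pts: 各候选检测车道线上对应点的坐标,形状为(K, 2)
    # S: 2x2协方差阵;返回马氏距离最小的候选车道线的索引
    S_inv = np.linalg.inv(S)
    diffs = detected_pts - pred_pt
    d2 = np.einsum('ki,ij,kj->k', diffs, S_inv, diffs)  # 各候选的马氏距离的平方
    return int(np.argmin(d2))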
可选地,还可以利用第二预测值对车道线预测算法的模型进行更新,当该更新后的模型用于后续预测过程时,可以获得更为准确的车道线模型的预测值。
可选地,还可以根据第二预测值,更新车体坐标系中的车道线模型。
可选地,可以采用图10所示的方法,更新车体坐标系中的车道线模型。
图10是本申请实施例的更新车体坐标系中车道线模型的方法的示意性流程图。下面结合图10所示各步骤进行介绍。
1001、根据图像坐标系中车道线模型的预测值的点序列,获得至少一组拟合直线段。
步骤1001相当于计算图像中描述车道线的点序列在车体坐标系下地面上的投影。
可选地,可以先对图像中的点序列进行去畸变处理,再利用旋转平移矩阵和相机(图像)的参数将去畸变后的点序列转移至车体坐标系中。旋转平移矩阵是指从图像坐标系转换到车体坐标系时的对应矩阵。
可选地,可以利用下面的式子获得图像坐标系中的点序列在车体坐标系中所对应的坐标。
(x_u, y_u) = f(x_d, y_d),   (9)
[x_w, y_w, 1]^T ∝ [R|T]·K^(-1)·[x_u, y_u, 1]^T,   (10)
其中,x_d、y_d表示图像中带畸变的点的坐标;x_u、y_u表示图像中去除畸变后的点的坐标;x_w、y_w表示图像中的点投影在车体坐标系下地面上的坐标;K表示相机(图像)的内参矩阵;[R|T]表示从图像坐标系转换到车体坐标系的旋转平移矩阵。
需要说明的是,上述式子(9)表示先对图像中的点序列进行去畸变处理,f(x_d,y_d)可以理解为对图像中带畸变的点(x_d,y_d)进行的处理操作,例如可以是一个函数。上述式子(10)表示利用图像坐标系和车体坐标系之间的旋转平移矩阵、相机所对应的参数,将去畸变后的点在图像坐标系中的坐标转换为车体坐标系中的坐标。K^(-1)表示对矩阵K求逆,或者称之为矩阵K的逆矩阵。应理解,上述式子(9)和(10)只是一种示例,也可以采用其他相近或相似的公式获得相同或相似的效果。
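示例性地,式(9)、(10)所述“先去畸变、再投影到车体坐标系下地面”的过程可以用如下代码片段示意(此处假设使用OpenCV的cv2.undistortPoints完成去畸变,并假设图像平面到地面的映射可以用一个3x3矩阵G(可以理解为由[R|T]和K^(-1)组合得到)表示,均为示例性假设):

import numpy as np
import cv2

def image_points_to_ground(pts, K, dist, G):
    # pts: 图像中带畸变的点,形状为(N, 2); K: 相机内参矩阵
    # dist: 畸变系数; G: 图像平面到车体坐标系地面的3x3映射矩阵
    und = cv2.undistortPoints(pts.reshape(-1, 1, 2).astype(np.float32),
                              K, dist, P=K)      # 式(9):去畸变
    und_h = np.hstack([und.reshape(-1, 2), np.ones((len(pts), 1))])
    ground = (G @ und_h.T).T                     # 式(10):投影到地面
    return ground[:, :2] / ground[:, 2:3]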
之后,可以将图像中车道线的各点序列在图像中拟合成至少一个直线段。
1002、获取至少一个拟合直线段的至少一组测量量。
该测量量包括直线段的以下至少一种参数:斜率、截距或中心点在车体坐标系中的坐标。
可选地,可以将车道线的点序列划分成每三个点为一组,在图像坐标系中拟合成多条直线(或者线段),也就是每组点拟合成一条直线,并生成该多条直线的多组测量值,例如可以包括直线的斜率、截距、中心点(即每条拟合线段的中心点)在车体坐标系下的X轴的坐标x_w。
举例说明,假设某一组(也可以称之为某一个集合)的三个点的坐标分别为(x_0,y_0)、(x_1,y_1)和(x_2,y_2),其中y_0<y_1<y_2。用Y_means表示一组测量量,则Y_means中可以包括该组点所确定的直线的斜率、截距以及中心点(上述y_1所对应的点)在斜率方向上的投影。假设用Y_means[0]表示Y_means的第0位元素,用Y_means[1]表示Y_means的第1位元素,以此类推。
在一个例子中,Y_means[0]和Y_means[1]可以利用下面的式子获得。
Y_means[1] = (x_2 - x_0)/(y_2 - y_0),   (11)
Y_means[0] = x_1 - Y_means[1]*y_1,   (12)
从上面的式子可以看出,Y_means中的第1位元素是根据(x_0,y_0)和(x_2,y_2)两个点所确定的直线的斜率,而Y_means中的第0位元素则是(x_1,y_1)在斜率方向上的投影大小。
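示例性地,式(11)、(12)的计算可以用如下代码片段示意(输入为某一组的三个点,函数名group_measurement为示例性假设):

def group_measurement(p0, p1, p2):
    # p0=(x0,y0)、p1=(x1,y1)、p2=(x2,y2),且y0 < y1 < y2
    x0, y0 = p0
    x1, y1 = p1
    x2, y2 = p2
    slope = (x2 - x0) / (y2 - y0)    # 式(11):该组点确定的直线的斜率
    intercept = x1 - slope * y1      # 式(12):中心点在斜率方向上的投影
    return intercept, slope          # 分别对应Y_means[0]和Y_means[1]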
1003、根据上述至少一组测量量对车体坐标系中的车道线模型进行更新。
可选地,可以建立上述至少一组测量量和在车体坐标系中获得的预测值之间的相对关系,从而根据二者的关系来更新车道线模型。例如,可以利用二者的关系,通过构建测量残差协方差矩阵来实现。
更新车道线模型可以包括更新车道线模型(车道线曲线方程)的斜率、截距等参数。
在本申请技术方案中,通过综合车道线在车体坐标系中的预测值和在图像坐标系中的检测信息,利用预测值和检测信息来获得实时的两个坐标系之间的映射关系,从而能够消除路面变化的影响,原因在于,这些路面变化会带来两个坐标系之间的映射关系的变化,而实时获取映射关系则相当于实时捕捉到映射关系的变化。另外,利用更为准确的车道线模型的预测值来更新车体坐标系的车道线预测算法的模型,相当于实时更新车体坐标系中的车道线预测算法的模型参数,能够加速预测算法的收敛,以及当利用该更新后的预测算法模型预测车道线模型时,能够提高预测到的车道线模型的准确性。此外,分区域设置单应性矩阵能够得到更精确的两个坐标系之间的映射关系,能够进一步提高车道线跟踪的精度或称之为准确性。
除上述技术效果以外,在本申请技术方案中,不需要以假设路面平整为前提,因此对于所有城市道路都是通用的。另外,在本申请技术方案中不需要再受车道线平行假设的局限,对于不平行的车道线依然适用,因此具有更好的通用性或者称之为普适性。在现有技术中,往往只需要跟踪一条或部分车道线,然后再根据平行假设推算剩余的其他车道线,因此准确性较低,且当遇到车道线不平行的情况时甚至会出现车道线跟踪失败的情况,跟踪的准确性过低影响使用,而与之不同的是,在本申请技术方案中对所有车道线都能够进行跟踪,精度更高,且不需要再考虑车道线是不是平行,具有普适性。
可选地,还可以将上述方法获得的车道线模型的预测值应用于其他模块或装置,来提高其他模块或装置的处理精度,下面以应用于路径规划和预警策略规划为例进行介绍。
可选地,可以将本申请实施例的车道线跟踪的方法应用于路径规划,该规划路径的方法包括:获取路径规划信息,根据第二预测值和路径规划信息,生成下一时段或下一时刻的路径规划方案。路径规划信息可以包括以下至少一种:道路信息、交通信息或自车信息;其中,道路信息可以包括以下至少一种:路障信息、道路的宽度信息或道路的长度信息;交通信息可以包括以下至少一种:红绿灯信息、交通规则信息、周围其他车辆的行驶信息或路况信息;自车信息可以包括以下至少一种:自车运动信息、位置信息、形态信息、结构信息,自车运动信息可以包括车辆的角速度、速度等,位置信息可以理解为车辆当前的位置,形态信息可以理解为车辆的形状、造型、尺寸等,结构信息可以理解为车辆的各个组成部分,例如可以分为车头、车身等。
可选地,还可以获取当前时刻的可行驶区域信息,从而根据第二预测值、路径规划信息和可行驶区域信息,确定下一时段或下一时刻的车道级的路径规划方案。
在该路径规划的方法中,利用本申请实施例所提供的车道线跟踪方法,能够获得更为准确的车道线模型的预测值,从而能够提高所规划路径的准确性。
可选地,可以将本申请实施例的车道线跟踪的方法应用于预警策略规划,该预警策略规划的方法包括:获取预警信息,根据第二预测值、道路信息和预设预警阈值,生成预警信号;根据预警信号生成预警策略规划信息,该预警策略规划信息用于表示对所述预警信号的响应策略;预警信息可以包括以下至少一种:车辆的位置信息、交通信息、路障信息。
在该预警策略规划的方法中,利用本申请实施例所提供的车道线跟踪方法,能够获得更为准确的车道线模型的预测值,从而能够提高预警的准确性。
上文对本申请实施例的车道线跟踪的方法进行了介绍,下面对本申请实施例的车道线跟踪的装置进行介绍。应理解,下文中介绍的车道线跟踪装置能够执行本申请实施例的车道线跟踪的方法的各个过程,下面在介绍装置的实施例时,会适当省略重复的描述。
本申请实施例提供一种车道线跟踪的装置,该装置包括获取单元和处理单元。该装置可以用于执行本申请实施例的车道线跟踪的方法的各步骤。例如,获取单元可以用于执行图7所示方法中的步骤701和步骤702,处理单元可以用于执行图7所示方法中的步骤703和704。又例如,处理单元还可以用于执行图8所示方法中的各步骤。又例如,处理单元还可以用于执行图9所示方法中的各步骤。又例如,处理单元还可以用于执行图10所示方法中的各步骤。
当获取单元用于执行步骤702时,假设是采用图6所示方法来执行,则这种情况相当于获取单元执行步骤601-603,但应理解,也可以不采用图6所示方法来执行步骤702,例如可以从存储装置中直接读取已经存储好的第一检测信息,在此不再列举。
可选地,获取单元可以通过下面几种方式获取第一预测值。
在一种实现方式中,获取单元可以通过获取在先时刻的自车运动信息等数据以及对获取的数据进行处理来获取第一预测值。
可选地,获取单元可以用于实现图2所示用户接口170的功能或用于实现图3所示I/O接口215的功能,以执行获取在先时刻的自车运动信息等数据的操作,例如图3中所示,可以利用I/O接口215,从输入设备217、收发器223等来获取在先时刻的自车运动信息等数据。在这种情况下,获取单元还可以用于实现图2所示控制系统130的部分功能或用于实现图3所示处理器203的部分功能,以执行对获取到的数据进行处理从而获得第一预测值的操作,例如利用车道线预测算法预测得到第一预测值。
在另一种实现方式中,获取单元还可以直接获取第一预测值,例如从存储装置中获取第一预测值,在这种情况下,获取单元可以用于实现图2所示用户接口170的功能或用于实现图3所示I/O接口215的功能,以执行获取第一预测值的操作。
可选地,获取单元可以通过下面几种方式获取第一检测信息。
在一种实现方式中,获取单元可以通过获取图像以及对获取的图像进行处理来获取第一检测信息。
可选地,获取单元可以用于实现图2所示相机125的功能或用于实现图3所示摄像头255的功能,以执行采集图像的操作。在这种情况下,获取单元还可以用于实现图2所示计算机视觉系统134的部分功能或用于实现图3所示处理器203的部分功能,以执行对采集到的图像进行处理从而获得第一检测信息的操作。
可选地,获取单元可以用于实现图2所示用户接口170的功能或用于实现图3所示I/O接口215的功能,以执行获取图像的操作,例如图3中所示,可以利用I/O接口215,从输入设备217、媒体盘221、收发器223或摄像头255等来获取图像。在这种情况下,获取单元还可以用于实现图2所示计算机视觉系统134的部分功能或用于实现图3所示处理器203的部分功能,以执行对获取到的图像进行处理从而获得第一检测信息的操作。
在另一种实现方式中,获取单元还可以直接获取第一检测信息,例如从存储装置中获取第一检测信息,在这种情况下,获取单元可以用于实现图2所示用户接口170的功能或用于实现图3所示I/O接口215的功能,以执行获取第一检测信息的操作。
可选地,处理单元可以用于根据自车信息对车道线进行预测以及用于根据预测值和图像中获得的检测信息来确定车道线跟踪结果等过程。
可选地,处理单元可以用于实现图2所示计算机视觉系统134、路线控制系统135或障碍规避系统136等中的一种或多种系统的功能或用于实现图3所示处理器203的功能,以执行获得车道线跟踪结果(例如上文所述第二预测值或利用第二预测值更新后的车道线模型)的操作、利用车道线跟踪结果进行路径规划的操作、利用车道线跟踪结果进行预警策略规划的操作等。
本申请实施例提供一种车道线跟踪的装置。该装置包括存储器、处理器、通信接口以及总线。其中,存储器、处理器、通信接口通过总线实现彼此之间的通信连接。
可选地,存储器可以是只读存储器(read only memory,ROM),静态存储设备,动态存储设备或者随机存取存储器(random access memory,RAM)。存储器可以存储程序,当存储器中存储的程序被处理器执行时,处理器和通信接口用于执行本申请实施例的车道线跟踪的方法的各个步骤。
可选地,存储器可以具有图2所示存储器152的功能或者具有图3所示系统内存235的功能,或者具有图4所示存储器340的功能,以实现上述存储程序的功能。可选地,处理器可以采用通用的CPU,微处理器,ASIC,图形处理器(graphic processing unit,GPU)或者一个或多个集成电路,用于执行相关程序,以实现本申请实施例的车道线跟踪的装置中的单元所需执行的功能,或者执行本申请实施例的车道线跟踪的方法的各个步骤。
可选地,处理器可以具有图2所示处理器151的功能或者具有图3所示处理器203的功能,或者具有图4所示处理器330的功能,以实现上述执行相关程序的功能。
可选地,处理器还可以是一种集成电路芯片,具有信号的处理能力。在实现过程中,本申请实施例的车道线跟踪的方法的各个步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令完成。
可选地,上述处理器还可以是通用处理器、数字信号处理器(digital signal processing,DSP)、专用集成电路(ASIC)、现场可编程门阵列(field programmable gate array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件,可以实现或者执行本申请实施例中公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成本申请实施例的车道线跟踪的装置中包括的单元所需执行的功能,或者执行本申请实施例的车道线跟踪的方法的各个步骤。
可选地,通信接口可以使用例如但不限于收发器一类的收发装置,来实现装置与其他设备或通信网络之间的通信。例如,可以通过通信接口获取第一检测信息。
总线可包括在装置各个部件(例如,存储器、处理器、通信接口)之间传送信息的通路。
本申请实施例还提供一种包含指令的计算机程序产品,该指令被计算机执行时使得该计算机实现上述方法实施例中的方法。
上述提供的任一种装置中相关内容的解释及有益效果均可参考上文提供的对应的方法实施例,此处不再赘述。
除非另有定义,本申请所使用的所有的技术和科学术语与属于本申请的技术领域的技术人员通常理解的含义相同。在本申请的说明书中所使用的术语只是为了描述具体的实施例的目的,不是旨在于限制本申请。
可选地,本申请实施例中涉及的网络设备包括硬件层、运行在硬件层之上的操作系统层,以及运行在操作系统层上的应用层。其中,硬件层可以包括CPU、内存管理单元(memory management unit,MMU)和内存(也称为主存)等硬件。操作系统层的操作系统可以是任意一种或多种通过进程(process)实现业务处理的计算机操作系统,例如,Linux操作系统、Unix操作系统、Android操作系统、iOS操作系统或windows操作系统等。应用层可以包含浏览器、通讯录、文字处理软件、即时通信软件等应用。
本申请实施例并未对本申请实施例提供的方法的执行主体的具体结构进行特别限定,只要能够通过运行记录有本申请实施例提供的方法的代码的程序,以根据本申请实施例提供的方法进行通信即可。例如,本申请实施例提供的方法的执行主体可以是终端设备或网络设备,或者,是终端设备或网络设备中能够调用程序并执行程序的功能模块。
本申请的各个方面或特征可以实现成方法、装置或使用标准编程和/或工程技术的制品。本申请中使用的术语“制品”可以涵盖可从任何计算机可读器件、载体或介质访问的计算机程序。例如,计算机可读介质可以包括但不限于:磁存储器件(例如,硬盘、软盘或磁带等),光盘(例如,压缩盘(compact disc,CD)、数字通用盘(digital versatile disc,DVD)等),智能卡和闪存器件(例如,可擦写可编程只读存储器(erasable programmable read-only memory,EPROM)、卡、棒或钥匙驱动器等)。
本申请描述的各种存储介质可代表用于存储信息的一个或多个设备和/或其它机器可读介质。术语“机器可读介质”可以包括但不限于:无线信道和能够存储、包含和/或承载指令和/或数据的各种其它介质。
需要说明的是,当处理器为通用处理器、DSP、ASIC、FPGA或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件时,存储器(存储模块)可以集成在处理器中。
还需要说明的是,本申请描述的存储器旨在包括但不限于这些和任意其它适合类型的存储器。
本领域普通技术人员可以意识到,结合本申请中所公开的实施例描述的各示例的单元及步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的保护范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。此外,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上,或者说对现有技术做出贡献的部分,或者该技术方案的部分,可以以计算机软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,该计算机软件产品包括若干指令,该指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。前述的存储介质可以包括但不限于:U盘、移动硬盘、ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (20)

  1. 一种车道线跟踪的方法,其特征在于,包括:
    获取第一预测值,所述第一预测值用于表示车体坐标系中的车道线模型,所述第一预测值是利用在先时刻的自车运动信息预测得到的;
    获取第一检测信息,所述第一检测信息包括当前时刻车道线在图像坐标系中的像素点;
    根据所述第一预测值和所述第一检测信息,确定第一映射关系,所述第一映射关系用于表示所述图像坐标系与所述车体坐标系之间的实时映射关系;
    根据所述第一映射关系确定第二预测值,所述第二预测值用于表示所述第一预测值的修正值。
  2. 如权利要求1所述的方法,其特征在于,所述根据所述第一映射关系确定第二预测值,包括:
    根据所述第一映射关系和所述第一预测值,获得第三预测值,所述第三预测值用于表示所述第一预测值在所述第一映射关系下的对应值;
    利用所述第一检测信息对所述第三预测值进行调整,以获得所述第二预测值。
  3. 如权利要求1或2所述的方法,其特征在于,所述根据所述第一预测值和所述第一检测信息,确定第一映射关系,包括:
    根据所述第一预测值和初始单应性矩阵,获得第四预测值,所述初始单应性矩阵用于表示所述图像坐标系和所述车体坐标系之间的初始映射关系,所述第四预测值用于表示所述第一预测值在所述初始映射关系下的对应值;
    根据所述第一检测信息和所述第四预测值,确定实时的单应性矩阵,所述实时的单应性矩阵用于表示所述第一映射关系。
  4. 如权利要求3所述的方法,其特征在于,所述方法还包括:
    将车辆的前方按距离划分为多个区域;
    获取对应于所述多个区域的多个所述实时的单应性矩阵,其中,所述多个区域中的每个区域均分别对应至少一个所述实时的单应性矩阵。
  5. 如权利要求1至4中任一项所述的方法,其特征在于,所述第一预测值是利用车道线预测算法的模型获得的,所述方法还包括:
    根据所述第二预测值对所述车道线预测算法的模型进行更新。
  6. 如权利要求1至5中任一项所述的方法,其特征在于,所述方法还包括:
    所述第一预测值是在当前时刻与在先时刻的时间间隔在预设范围内时获得的。
  7. 如权利要求1至6中任一项所述的方法,其特征在于,所述方法还包括:
    获取路径规划信息,所述路径规划信息包括以下至少一种:道路信息、交通信息或自车信息;其中,所述道路信息包括以下至少一种:路障信息、道路的宽度信息或道路的长度信息;所述交通信息包括以下至少一种:红绿灯信息、交通规则信息、周围其他车辆的行驶信息或路况信息;所述自车信息包括以下至少一种:自车运动信息、位置信息、形态信息、结构信息;
    根据所述第二预测值和所述路径规划信息,生成下一时段或下一时刻的路径规划方案。
  8. 如权利要求7所述的方法,其特征在于,所述方法还包括:
    获取当前时刻的可行驶区域信息;
    根据所述第二预测值、所述路径规划信息和所述可行驶区域信息,确定下一时段或下一时刻的车道级的路径规划方案。
  9. 如权利要求1至6中任一项所述的方法,其特征在于,所述方法还包括:
    获取预警信息,所述预警信息包括以下至少一种:车辆的位置信息、交通信息、路障信息;
    根据所述第二预测值、所述道路信息和预设预警阈值,生成预警信号;
    根据所述预警信号生成预警策略规划信息,所述预警策略规划信息用于表示对所述预警信号的响应策略。
  10. 一种车道线跟踪的装置,其特征在于,包括:
    获取单元,用于获取第一预测值,所述第一预测值用于表示车体坐标系中的车道线模型,所述第一预测值是利用在先时刻的自车运动信息预测得到的;
    所述获取单元还用于,获取第一检测信息,所述第一检测信息包括当前时刻车道线在图像坐标系中的像素点;
    处理单元,用于根据所述第一预测值和所述第一检测信息,确定第一映射关系,所述第一映射关系用于表示所述图像坐标系与所述车体坐标系之间的实时映射关系;
    所述处理单元还用于,根据所述第一映射关系确定第二预测值,所述第二预测值用于表示所述第一预测值的修正值。
  11. 如权利要求10所述的装置,其特征在于,所述处理单元具体用于:
    根据所述第一映射关系和所述第一预测值,获得第三预测值,所述第三预测值用于表示所述第一预测值在所述第一映射关系下的对应值;
    利用所述第一检测信息对所述第三预测值进行调整,以获得所述第二预测值。
  12. 如权利要求10或11所述的装置,其特征在于,所述处理单元具体用于:
    根据所述第一预测值和初始单应性矩阵,获得第四预测值,所述初始单应性矩阵用于表示所述图像坐标系和所述车体坐标系之间的初始映射关系,所述第四预测值用于表示所述第一预测值在所述初始映射关系下的对应值;
    根据所述第一检测信息和所述第四预测值,确定实时的单应性矩阵,所述实时的单应性矩阵用于表示所述第一映射关系。
  13. 如权利要求12所述的装置,其特征在于,所述处理单元还用于:
    将车辆的前方按距离划分为多个区域;
    获取对应于所述多个区域的多个所述实时的单应性矩阵,其中,所述多个区域中的每个区域均分别对应至少一个所述实时的单应性矩阵。
  14. 如权利要求10至13中任一项所述的装置,其特征在于,所述第一预测值是利用车道线预测算法的模型获得的,所述处理单元还用于:根据所述第二预测值,对所述车道线预测算法的模型进行更新。
  15. 如权利要求10至14中任一项所述的装置,其特征在于,所述第一预测值是在当前时刻与在先时刻的时间间隔在预设范围内时获得的。
  16. 如权利要求10至15中任一项所述的装置,其特征在于,所述获取单元还用于:获取路径规划信息,所述路径规划信息包括以下至少一种:道路信息、交通信息或自车信息;其中,所述道路信息包括以下至少一种:路障信息、道路的宽度信息或道路的长度信息;所述交通信息包括以下至少一种:红绿灯信息、交通规则信息、周围其他车辆的行驶信息或路况信息;所述自车信息包括以下至少一种:自车运动信息、位置信息、形态信息、结构信息;
    所述处理单元还用于:根据所述第二预测值和所述路径规划信息,生成下一时段或下一时刻的路径规划方案。
  17. 如权利要求16所述的装置,其特征在于,所述获取单元还用于:获取当前时刻的可行驶区域信息;
    所述处理单元还用于:根据所述第二预测值、所述路径规划信息和所述可行驶区域信息,确定下一时段或下一时刻的车道级的路径规划方案。
  18. 如权利要求10至15中任一项所述的装置,其特征在于,所述获取单元还用于:获取预警信息,所述预警信息包括以下至少一种:车辆的位置信息、交通信息、路障信息;
    所述处理单元还用于:
    根据所述第二预测值、所述道路信息和预设预警阈值,生成预警信号;
    根据所述预警信号生成预警策略规划信息,所述预警策略规划信息用于表示对所述预警信号的响应策略。
  19. 一种芯片,其特征在于,所述芯片包括处理器与数据接口,所述处理器通过所述数据接口读取存储器上存储的指令,以执行如权利要求1至9中任一项所述的方法。
  20. 一种计算机可读存储介质,其特征在于,所述计算机可读介质存储用于设备执行的程序代码,该程序代码包括用于执行如权利要求1至9中任一项所述的方法的指令。
PCT/CN2020/087506 2020-04-28 2020-04-28 车道线跟踪方法和装置 WO2021217420A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202080006573.9A CN113168708B (zh) 2020-04-28 2020-04-28 车道线跟踪方法和装置
EP20933833.4A EP4141736A4 (en) 2020-04-28 2020-04-28 LANE KEEPING METHOD AND APPARATUS
PCT/CN2020/087506 WO2021217420A1 (zh) 2020-04-28 2020-04-28 车道线跟踪方法和装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/087506 WO2021217420A1 (zh) 2020-04-28 2020-04-28 车道线跟踪方法和装置

Publications (1)

Publication Number Publication Date
WO2021217420A1 true WO2021217420A1 (zh) 2021-11-04

Family

ID=76879301

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/087506 WO2021217420A1 (zh) 2020-04-28 2020-04-28 车道线跟踪方法和装置

Country Status (3)

Country Link
EP (1) EP4141736A4 (zh)
CN (1) CN113168708B (zh)
WO (1) WO2021217420A1 (zh)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114454888A (zh) * 2022-02-22 2022-05-10 福思(杭州)智能科技有限公司 一种车道线预测方法、装置、电子设备及车辆
CN114723785A (zh) * 2022-04-22 2022-07-08 禾多科技(北京)有限公司 车道线关键点跟踪方法、装置、设备和计算机可读介质
CN114750759A (zh) * 2022-04-19 2022-07-15 合众新能源汽车有限公司 一种跟车目标确定方法、装置、设备及介质
US20220236074A1 (en) * 2021-01-25 2022-07-28 Nio Technology (Anhui) Co., Ltd Method and device for building road model
CN115063762A (zh) * 2022-05-20 2022-09-16 广州文远知行科技有限公司 车道线的检测方法、装置、设备及存储介质
CN115782926A (zh) * 2022-12-29 2023-03-14 苏州市欧冶半导体有限公司 一种基于道路信息的车辆运动预测方法及装置
CN117636270A (zh) * 2024-01-23 2024-03-01 南京理工大学 基于单目摄像头的车辆抢道事件识别方法及设备

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113343956B (zh) * 2021-08-06 2021-11-19 腾讯科技(深圳)有限公司 路况信息的预测方法、装置和存储介质及电子设备
CN113682313B (zh) * 2021-08-11 2023-08-22 中汽创智科技有限公司 一种车道线确定方法、确定装置及存储介质
CN113739811B (zh) * 2021-09-03 2024-06-11 阿波罗智能技术(北京)有限公司 关键点检测模型的训练和高精地图车道线的生成方法设备
CN114743174A (zh) * 2022-03-21 2022-07-12 北京地平线机器人技术研发有限公司 观测车道线的确定方法、装置、电子设备和存储介质
CN114701568B (zh) * 2022-03-28 2023-04-25 广东皓行科技有限公司 车辆转向角修正方法、处理器及车辆
CN116872926A (zh) * 2023-08-16 2023-10-13 北京斯年智驾科技有限公司 一种自动驾驶车道保持方法、系统、装置及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103971521A (zh) * 2014-05-19 2014-08-06 清华大学 道路交通异常事件实时检测方法及装置
CN106842231A (zh) * 2016-11-08 2017-06-13 长安大学 一种道路边界检测及跟踪方法
CN109409205A (zh) * 2018-09-07 2019-03-01 东南大学 基于线间距特征点聚类的航拍视频公路车道线检测方法
CN109948552A (zh) * 2019-03-20 2019-06-28 四川大学 一种复杂交通环境中的车道线检测的方法
WO2019224103A1 (en) * 2018-05-22 2019-11-28 Connaught Electronics Ltd. Lane detection based on lane models

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101976464B (zh) * 2010-11-03 2013-07-31 北京航空航天大学 基于单应性矩阵的多平面动态的增强现实注册的方法
CN102358287A (zh) * 2011-09-05 2012-02-22 北京航空航天大学 一种用于车辆自动驾驶机器人的轨迹跟踪控制方法
JP6770393B2 (ja) * 2016-10-04 2020-10-14 株式会社豊田中央研究所 トラッキング装置及びプログラム
EP3865822A1 (en) * 2018-05-15 2021-08-18 Mobileye Vision Technologies Ltd. Systems and methods for autonomous vehicle navigation
CN110070025B (zh) * 2019-04-17 2023-03-31 上海交通大学 基于单目图像的三维目标检测系统及方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103971521A (zh) * 2014-05-19 2014-08-06 清华大学 道路交通异常事件实时检测方法及装置
CN106842231A (zh) * 2016-11-08 2017-06-13 长安大学 一种道路边界检测及跟踪方法
WO2019224103A1 (en) * 2018-05-22 2019-11-28 Connaught Electronics Ltd. Lane detection based on lane models
CN109409205A (zh) * 2018-09-07 2019-03-01 东南大学 基于线间距特征点聚类的航拍视频公路车道线检测方法
CN109948552A (zh) * 2019-03-20 2019-06-28 四川大学 一种复杂交通环境中的车道线检测的方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4141736A4 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220236074A1 (en) * 2021-01-25 2022-07-28 Nio Technology (Anhui) Co., Ltd Method and device for building road model
CN114454888A (zh) * 2022-02-22 2022-05-10 福思(杭州)智能科技有限公司 一种车道线预测方法、装置、电子设备及车辆
CN114454888B (zh) * 2022-02-22 2023-10-13 福思(杭州)智能科技有限公司 一种车道线预测方法、装置、电子设备及车辆
CN114750759A (zh) * 2022-04-19 2022-07-15 合众新能源汽车有限公司 一种跟车目标确定方法、装置、设备及介质
CN114750759B (zh) * 2022-04-19 2024-04-30 合众新能源汽车股份有限公司 一种跟车目标确定方法、装置、设备及介质
CN114723785A (zh) * 2022-04-22 2022-07-08 禾多科技(北京)有限公司 车道线关键点跟踪方法、装置、设备和计算机可读介质
CN115063762A (zh) * 2022-05-20 2022-09-16 广州文远知行科技有限公司 车道线的检测方法、装置、设备及存储介质
CN115782926A (zh) * 2022-12-29 2023-03-14 苏州市欧冶半导体有限公司 一种基于道路信息的车辆运动预测方法及装置
CN115782926B (zh) * 2022-12-29 2023-12-22 苏州市欧冶半导体有限公司 一种基于道路信息的车辆运动预测方法及装置
CN117636270A (zh) * 2024-01-23 2024-03-01 南京理工大学 基于单目摄像头的车辆抢道事件识别方法及设备
CN117636270B (zh) * 2024-01-23 2024-04-09 南京理工大学 基于单目摄像头的车辆抢道事件识别方法及设备

Also Published As

Publication number Publication date
CN113168708B (zh) 2022-07-12
EP4141736A4 (en) 2023-06-21
CN113168708A (zh) 2021-07-23
EP4141736A1 (en) 2023-03-01

Similar Documents

Publication Publication Date Title
WO2021217420A1 (zh) 车道线跟踪方法和装置
WO2021027568A1 (zh) 障碍物避让方法及装置
US11545033B2 (en) Evaluation framework for predicted trajectories in autonomous driving vehicle traffic prediction
WO2022001773A1 (zh) 轨迹预测方法及装置
CN112639883B (zh) 一种相对位姿标定方法及相关装置
WO2021102955A1 (zh) 车辆的路径规划方法以及车辆的路径规划装置
WO2021212379A1 (zh) 车道线检测方法及装置
WO2022104774A1 (zh) 目标检测方法和装置
CN112534483B (zh) 预测车辆驶出口的方法和装置
CN113160547B (zh) 一种自动驾驶方法及相关设备
CN112512887B (zh) 一种行驶决策选择方法以及装置
CN113498529B (zh) 一种目标跟踪方法及其装置
WO2022142839A1 (zh) 一种图像处理方法、装置以及智能汽车
CN114754780A (zh) 车道线规划方法及相关装置
US20230048680A1 (en) Method and apparatus for passing through barrier gate crossbar by vehicle
US20240017719A1 (en) Mapping method and apparatus, vehicle, readable storage medium, and chip
US20220309806A1 (en) Road structure detection method and apparatus
CN112810603B (zh) 定位方法和相关产品
CN114792149A (zh) 一种轨迹预测方法、装置及地图
WO2022178858A1 (zh) 一种车辆行驶意图预测方法、装置、终端及存储介质
CN114445490A (zh) 一种位姿确定方法及其相关设备
CN114764980B (zh) 一种车辆转弯路线规划方法及装置
WO2021159397A1 (zh) 车辆可行驶区域的检测方法以及检测装置
CN111655561A (zh) 无需地图和定位的自动驾驶车辆的拐角协商方法
WO2022061725A1 (zh) 交通元素的观测方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20933833

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020933833

Country of ref document: EP

Effective date: 20221123