CN116091556A - Road edge tracking method, device, equipment and storage medium - Google Patents

Road edge tracking method, device, equipment and storage medium

Info

Publication number
CN116091556A
CN116091556A
Authority
CN
China
Prior art keywords
point set
road edge
road
edge point
edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211637326.6A
Other languages
Chinese (zh)
Inventor
熊驰
陈宏峰
华智
方伟业
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Zero Run Technology Co Ltd
Original Assignee
Zhejiang Zero Run Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Zero Run Technology Co Ltd filed Critical Zhejiang Zero Run Technology Co Ltd
Priority to CN202211637326.6A
Publication of CN116091556A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses a road edge tracking method, device, equipment and storage medium. The road edge tracking method comprises: acquiring a predicted road edge point set and a detected road edge point set at the current moment; grouping the detected road edge point set based on the predicted road edge point set to obtain a grouping result of the detected road edge point set; acquiring a road edge recognition result indicating whether the road on which the vehicle travels at the current moment has an incomplete road edge; fusing the predicted road edge point set and the detected road edge point set based on the road edge recognition result and the grouping result to obtain a fused road edge point set; and performing curve fitting on the fused road edge point set to obtain the road edge fitting result at the current moment. In this way, the predicted and detected road edge point sets can be fused according to whether the detected road edge points indicate a road change and whether the road edge is occluded, so that road edges under complex road conditions can be tracked stably and accurately, with good robustness.

Description

Road edge tracking method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of traffic technologies, and in particular, to a method, an apparatus, a device, and a storage medium for tracking a road edge.
Background
Perception of the surrounding environment while a vehicle is travelling is the basis of intelligent driver assistance and autonomous driving. Road edge detection is a key link in intelligent path planning and decision control, and also underpins driver-assistance functions such as Lane Keeping Assist (LKA) and Lane Departure Warning (LDW).
Existing road edge detection technology generally acquires a picture of the current road scene in which the vehicle is located and detects the road edge from that picture. However, such a detection strategy cannot adjust to the real-time road edge condition, resulting in low road edge detection accuracy.
Disclosure of Invention
In order to solve the above technical problems in the prior art, the present application provides a road edge tracking method, device, equipment and storage medium.
To solve the above problems, the present application provides a road edge tracking method comprising: acquiring a predicted road edge point set and a detected road edge point set at the current moment; grouping the detected road edge point set based on the predicted road edge point set to obtain a grouping result of the detected road edge point set; acquiring a road edge recognition result indicating whether the road on which the vehicle travels at the current moment has an incomplete road edge; fusing the predicted road edge point set and the detected road edge point set based on the road edge recognition result and the grouping result to obtain a fused road edge point set; and performing curve fitting on the fused road edge point set to obtain the road edge fitting result at the current moment.
To solve the above problems, the present application provides a road edge fitting device comprising an acquisition module, a grouping module, a fusion module and a fitting module. The acquisition module is used for acquiring a predicted road edge point set and a detected road edge point set at the current moment, and for acquiring a road edge recognition result indicating whether the road on which the vehicle travels at the current moment has an incomplete road edge. The grouping module is used for grouping the detected road edge point set based on the predicted road edge point set to obtain a grouping result of the detected road edge point set. The fusion module is used for fusing the predicted and detected road edge point sets based on the road edge recognition result and the grouping result to obtain a fused road edge point set. The fitting module is used for performing curve fitting on the fused road edge point set to obtain the road edge fitting result at the current moment.
To solve the above-mentioned problem, the present application provides a road edge fitting device, the road edge fitting device includes: a processor and a memory, the memory storing a computer program, the processor being configured to execute the computer program to implement the method described above.
To solve the above-described problems, the present application provides a computer-readable storage medium having stored thereon program instructions that, when executed by a processor, implement the above-described method.
Compared with the prior art, the road edge tracking method of the present application comprises: acquiring a predicted road edge point set and a detected road edge point set at the current moment; grouping the detected road edge point set based on the predicted road edge point set to obtain a grouping result of the detected road edge point set; acquiring a road edge recognition result indicating whether the road on which the vehicle travels at the current moment has an incomplete road edge; fusing the predicted and detected road edge point sets based on the road edge recognition result and the grouping result to obtain a fused road edge point set; and performing curve fitting on the fused road edge point set to obtain the road edge fitting result at the current moment. By referring simultaneously to the road edge recognition result and the grouping result when fusing the predicted and detected road edge point sets, and then curve-fitting the fused road edge points, the two sets can be fused according to whether the detected road edge points indicate a road change and whether the road edge is occluded, so that road edges under complex road conditions are tracked stably and accurately, with good robustness.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of an embodiment of the road edge tracking method provided in the present application;
FIG. 2 is a schematic flow chart of step S102 in FIG. 1;
FIG. 3 is a schematic flow chart of an embodiment of obtaining the predicted road edge point set at the current moment provided in the present application;
FIG. 4 is a schematic structural diagram of an embodiment of the road edge fitting device provided in the present application;
FIG. 5 is a schematic structural diagram of an embodiment of the road edge fitting apparatus provided in the present application;
FIG. 6 is a schematic structural diagram of an embodiment of the computer storage medium provided in the present application.
Detailed Description
The present application is described in further detail below with reference to the drawings and examples. It is specifically noted that the following examples are only for illustration of the present application, but do not limit the scope of the present application. Likewise, the following embodiments are only some, but not all, of the embodiments of the present application, and all other embodiments obtained by one of ordinary skill in the art without inventive effort are within the scope of the present application.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In the description of the present application, it should be noted that, unless explicitly stated and limited otherwise, the terms "mounted," "disposed," "connected," and "coupled" are to be construed broadly: for example, a connection may be fixed, detachable, or integral; it may be mechanical or electrical; and it may be direct or via an intermediate medium. Those of ordinary skill in the art can understand the specific meanings of these terms in the present application according to the specific circumstances.
Perception of the surrounding environment while a vehicle is travelling is the basis of intelligent driver assistance and autonomous driving. Road edge detection is a key link in intelligent path planning and decision control, and also underpins driver-assistance functions such as Lane Keeping Assist (LKA) and Lane Departure Warning (LDW).
Existing road edge tracking methods for complex road conditions usually adopt a single strategy of fusing the detection result with the prediction result, and when the road edge is occluded or partially missing it is difficult to track the road edge's shape and position accurately. For example, on some roads congestion causes the road edge to be occluded, so the lidar cannot detect the occluded road edge points; a conventional tracking method that fuses detection and prediction with a single strategy cannot adjust to the real-time road edge condition and struggles to track the road edge's shape and position. On some roads, such as crossroads and T-junctions, part of the road edge is missing; the missing road edge points likewise introduce errors into the road edge fitting, and a single fusion strategy again cannot adapt the tracking strategy to the real-time condition. Finally, some roads contain multiple road edge lines, such as ramps and viaduct entrances and exits; a conventional tracking method with no matching rule for road edge points may produce jumps in the tracking result, affecting the output of the actual road edge.
In order to solve the above series of technical problems in the prior art, the present application provides a road edge tracking method. Referring to FIG. 1, FIG. 1 is a schematic flow chart of an embodiment of the road edge tracking method provided in the present application, which specifically includes the following steps S101 to S105.
Step S101: acquiring a predicted road edge point set and a detected road edge point set at the current moment.
In order to accurately identify the actual road edge condition while the vehicle travels normally on a road with road edges, the predicted road edge point set and the detected road edge point set at the current moment can be acquired, so that the road edge condition to be output is obtained by processing the two sets. The predicted road edge point set may be obtained by applying a specific algorithm to related data to predict the road edge condition at the current moment; for example, it may be obtained by propagating the road edge output at the previous moment according to the running state of the vehicle. The detected road edge point set may be obtained by acquiring road edge information in real time during driving and then analysing it. For example, an image acquisition device may capture road edge pictures in real time, which are then processed into the detected road edge point set. Alternatively, a raw point cloud may be captured by lidar and processed into the detected road edge point set: a laser device mounted on the vehicle emits pulsed laser light that scatters off surrounding trees, roads, vehicles, pedestrians and the like, and part of the reflected light returns to the lidar receiver.
According to the laser ranging principle, the distance from the lidar to each target point can be obtained; as the pulsed laser continuously scans the target object, data for all target points on it are collected. Imaging with these data yields an accurate three-dimensional image, which is then analysed to obtain the detected road edge point set.
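The ranging principle described above can be sketched in a few lines: the distance to a target point is half the pulse's round-trip time multiplied by the speed of light. The function name is illustrative, not from the patent:

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_seconds: float) -> float:
    """Distance from the lidar to a target point, per the laser ranging
    principle: half the round-trip time times the speed of light."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0
```

Repeating this per returned pulse while the laser scans yields the raw point cloud that is later processed into the detected road edge point set.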
Step S102: grouping the detected road edge point set based on the predicted road edge point set to obtain the grouping result of the detected road edge point set.
A detected road edge point is usually a scattered point without road edge attribute features, while the predicted road edge point set at the current moment is computed by a related algorithm and is therefore accurate as long as the road edge has not changed greatly. The predicted set can thus be used to group the detected set. The predicted road edge point set usually comprises a left road edge predicted point set and a right road edge predicted point set; by comparing each detected road edge point with the predicted points, the detected set can be divided into a left road edge detected point set, a right road edge detected point set and a newly generated road edge detected point set. The grouping result of the detected road edge point set therefore records, among other things, whether a newly generated road edge point set exists.
Step S103: acquiring a road edge recognition result indicating whether the road on which the vehicle travels at the current moment has an incomplete road edge.
While the vehicle travels normally on the road, images around the vehicle can be acquired and analysed to determine whether the road at the current moment has an incomplete road edge. For example, the vehicle may be equipped with a camera sensor that captures road edge images on both sides of the vehicle in real time; the images are fed into a trained neural network model, which outputs whether the road edge is occluded or missing. When the road edge is occluded or missing, a recognition result of road edge incompleteness is output; otherwise, a result of no road edge incompleteness is output. The neural network model may be an EfficientNet or ResNet model; in other embodiments any other network model may be used, as long as it can output whether the road has an incomplete road edge.
Step S104: fusing the predicted road edge point set and the detected road edge point set based on the road edge recognition result and the grouping result of the detected road edge point set to obtain the fused road edge point set.
The road edge recognition result has two cases: road edge incompleteness exists, or it does not. Likewise, the grouping result of the detected road edge point set has two cases: a newly generated road edge point set exists, or it does not. Once the current road edge recognition result and grouping result are determined, the predicted and detected road edge point sets can be fused to obtain the fused road edge point set. The fusion may consist of outputting only the predicted set, or only the detected set, as the fused road edge points; or respective weights may be set for the predicted and detected sets, and a fusion algorithm applied to the two sets to finally obtain the fused road edge point set.
Step S105: performing curve fitting on the fused road edge point set to obtain the road edge fitting result at the current moment.
After the fused road edge points are obtained, curve fitting can be performed on them with a curve-fitting algorithm to obtain the road edge fitting result at the current moment. For example, the fused road edge points can be fitted with the RANSAC algorithm, the least squares method or another curve-fitting algorithm; when the fitted curve model does not meet the requirements, the fused road edge point set can be fitted repeatedly in the same way until a road edge fitting result is obtained.
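The patent leaves the curve model open (RANSAC, least squares, or others). As a minimal self-contained sketch, an ordinary least-squares line fit over fused edge points could look like this; the function name and the first-order model are illustrative choices, not mandated by the text:

```python
def fit_line(points):
    """Ordinary least-squares fit of y = a*x + b over (x, y) points."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    b = (sy - a * sx) / n                          # intercept
    return a, b
```

In practice a higher-order model or a RANSAC loop around such a fit would be used; when the fitted model does not meet requirements, the fit is simply re-run, as the text describes.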
In this embodiment, the road edge recognition result and the grouping result of the detected road edge point set are referred to simultaneously when fusing the predicted and detected road edge point sets, and curve fitting is then performed on the fused road edge points. The two sets can thus be fused according to whether the detected road edge points indicate a road change and whether the road edge is occluded, so that road edges under complex road conditions are tracked stably and accurately, with good robustness.
In one embodiment, fusing the predicted road edge point set and the detected road edge point set based on the road edge recognition result and the grouping result of the detected road edge point set (step S104) includes: determining respective fusion weights of the predicted and detected road edge point sets based on the road edge recognition result and the grouping result; and fusing the two sets according to their respective fusion weights.
Specifically, the predicted and detected road edge point sets can be fused according to the following calculation:
f(x) = E·g(x) + F·c(x)
where g(x) denotes the prediction result of the predicted road edge point set, c(x) denotes the detection result of the detected road edge point set, E denotes the prediction weight coefficient of the predicted set, F denotes the detection weight coefficient of the detected set, and f(x) denotes the fused result of the two sets.
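Applied coordinate-wise to matched predicted/detected point pairs, this weighted fusion can be sketched as follows. This is a minimal illustration; the patent does not specify how points are matched, and the function name is an assumption:

```python
def fuse_points(predicted, detected, e, f):
    """f(x) = E*g(x) + F*c(x), applied coordinate-wise to matched
    predicted/detected road edge point pairs (x, y)."""
    return [
        (e * px + f * dx, e * py + f * dy)
        for (px, py), (dx, dy) in zip(predicted, detected)
    ]
```

With E = 1, F = 0 this reduces to outputting only the prediction; with E = 0, F = 1, only the detection, matching the limiting cases described in the embodiments.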
The weight coefficients of the predicted and detected road edge point sets are determined from the grouping result and the road edge recognition result, so that the fused result better matches the actual situation: when the prediction result is judged more trustworthy, the prediction weight coefficient is set larger than the detection weight coefficient; otherwise, the prediction weight coefficient is set smaller than the detection weight coefficient. Specifically, see the following embodiments:
In one embodiment, determining the respective fusion weights of the predicted and detected road edge point sets based on the road edge recognition result and the grouping result includes: if the grouping result of the detected road edge point set contains no newly generated road edge point set and the recognition result indicates no road edge incompleteness, determining that the difference between the fusion weight of the predicted set and that of the detected set is less than or equal to a preset threshold.
This indicates that the current road edge is neither occluded nor missing, and that the vehicle continues to travel on the current road rather than entering a new road. In this case both the prediction result and the detection result can be trusted, so the difference between the fusion weight of the predicted set and that of the detected set is set to be at most the preset threshold, and the two sets are fused together. For example, both weight coefficients may be set to 0.5, making the difference 0, and the predicted and detected road edge point sets are fused to obtain the final result.
In one embodiment, determining the respective fusion weights of the predicted and detected road edge point sets based on the road edge recognition result and the grouping result includes: if the grouping result of the detected road edge point set contains no newly generated road edge point set and the recognition result indicates road edge incompleteness, determining that the fusion weight of the predicted road edge point set is greater than that of the detected road edge point set.
This indicates that the current road edge is at least partially occluded or missing, so the occluded or missing part cannot be detected, while the vehicle continues to travel on the current road rather than entering a new road. For this case, a conventional tracking algorithm would still fuse detection and prediction, and the output road edge would deviate from the actual one. In this embodiment, because the occluded or missing part of the road edge cannot be detected, the detected road edge point set does not match the actual situation, and the prediction result is judged more trustworthy; the fusion weight of the predicted set is therefore set greater than that of the detected set. For example, the prediction weight coefficient may be set to 1 and the detection weight coefficient to 0, so that the predicted road edge point set is taken directly as the final fused result.
In one embodiment, determining the respective fusion weights of the predicted and detected road edge point sets based on the road edge recognition result and the grouping result includes: if the grouping result of the detected road edge point set contains a newly generated road edge point set and the recognition result indicates no road edge incompleteness, determining that the fusion weight of the predicted road edge point set is smaller than that of the detected road edge point set.
This indicates that the current road edge is neither occluded nor missing, but that the vehicle does not continue on the current road and instead enters a new road. Since the predicted road edge point set normally assumes the vehicle stays on the original road, it does not match the actual situation when the vehicle enters a new road, and the detection result is judged more trustworthy; the fusion weight of the detected set is therefore set greater than that of the predicted set. For example, the detection weight coefficient may be set to 1 and the prediction weight coefficient to 0, so that the detected road edge point set is taken directly as the final fused result.
In one embodiment, determining the respective fusion weights of the predicted and detected road edge point sets based on the road edge recognition result and the grouping result includes: if the grouping result of the detected road edge point set contains a newly generated road edge point set and the recognition result indicates road edge incompleteness, determining that the fusion weight of the predicted road edge point set is smaller than that of the detected road edge point set.
This indicates that the current road edge is at least partially occluded or missing, so the occluded or missing part cannot be detected, and a newly generated road edge point set has also been detected. In this embodiment, although the occluded or missing part of the road edge cannot be detected, a new road edge point set exists, so the detection result is judged more trustworthy; the fusion weight of the predicted set is therefore set smaller than that of the detected set. For example, the prediction weight coefficient may be set to 0 and the detection weight coefficient to 1, so that the detected road edge point set is taken directly as the final fused result.
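The four weight-selection cases in the embodiments above can be summarised in one sketch. The concrete values (0.5/0.5, 1/0, 0/1) follow the examples given in the text; they are illustrative, not mandated, and the function name is an assumption:

```python
def fusion_weights(has_new_edge_set: bool, edge_incomplete: bool):
    """Return (E, F): prediction and detection weight coefficients."""
    if has_new_edge_set:
        # A new road edge point set was detected: trust detection,
        # whether or not the current edge is also occluded/missing.
        return 0.0, 1.0
    if edge_incomplete:
        # Edge occluded or missing, no new road: trust prediction.
        return 1.0, 0.0
    # No occlusion/missing edge and no new road: trust both equally.
    return 0.5, 0.5
```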
The above embodiments describe how the predicted and detected road edge point sets are fused according to the grouping result of the detected set; how the detected set is grouped is therefore important. Referring to FIG. 2, FIG. 2 is a schematic flow chart of an embodiment of step S102 in FIG. 1, which specifically includes the following steps S201 to S203.
Step S201: selecting, from the detected road edge point set, a target road edge point whose distance values have not yet been calculated; calculating the distance between the target point and the left road edge predicted point set in the predicted road edge point set to obtain a left road edge distance value; calculating the distance between the target point and the right road edge predicted point set to obtain a right road edge distance value; and repeating this step until all detected road edge points have been traversed.
The detected road edge point set contains many detected points, which must be grouped to determine which are left road edge points of the current road, which are right road edge points, and which are new road edge points. In this embodiment, a point whose distance values have not been calculated is selected from the detected points as the target road edge point; its distance to the left road edge predicted points is computed to obtain the left road edge distance value, and its distance to the right road edge predicted point set to obtain the right road edge distance value, so that whether the target point belongs to the left or right road edge can be judged from the magnitudes of the distance values. After the left and right distance values of one detected point are obtained, those of the next detected point are computed in the same way, until the left and right distance values of every detected point are obtained.
Specifically, calculating the distance values of a detected road edge point may include: calculating the distances between the selected target road edge point and all left road edge predicted points in the left road edge predicted point set, and taking the average of these distances as the left road edge distance value; and calculating the distances between the selected target road edge point and all right road edge predicted points in the right road edge predicted point set, and taking the average of these distances as the right road edge distance value. That is, the Euclidean distance between the target road edge point and each left road edge predicted point is calculated, the calculated Euclidean distances are summed and averaged, and the average is taken as the left road edge distance value; the right road edge distance value is calculated in the same manner. The same steps are repeated until all detected road edge points have been traversed, yielding a left road edge distance value and a right road edge distance value for each detected road edge point.
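As an illustration only (not part of the patent), the per-point distance calculation of step S201 can be sketched in Python; the coordinates and predicted point sets below are hypothetical:

```python
import math

def edge_distance(target_point, predicted_points):
    """Average Euclidean distance from one detected road edge point
    to every point in a predicted road edge point set (step S201)."""
    total = sum(math.dist(target_point, p) for p in predicted_points)
    return total / len(predicted_points)

# Hypothetical predicted left/right road edge point sets (vehicle frame, metres).
left_pred = [(5.0, 1.8), (10.0, 1.8), (15.0, 1.8)]
right_pred = [(5.0, -1.8), (10.0, -1.8), (15.0, -1.8)]

detected = (10.0, 1.7)                            # a detected point near the left edge
left_dist = edge_distance(detected, left_pred)    # left road edge distance value
right_dist = edge_distance(detected, right_pred)  # right road edge distance value
```

Since `left_dist` is the smaller of the two values here, this detected point would be compared against the preset distance threshold as a left road edge candidate in step S202.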
Step S202: comparing the smaller of the left road edge distance value and the right road edge distance value of each detected road edge point with a preset distance threshold.
After the left road edge distance value and the right road edge distance value of each detected road edge point are obtained, the detected road edge points need to be grouped according to these distance values. In this embodiment, for every detected road edge point, the smaller of its left road edge distance value and right road edge distance value is compared with a preset distance threshold, where the preset distance threshold may be set according to the actual situation. In an exemplary embodiment, a target road edge point is selected from the detected road edge point set; if, for example, its left road edge distance value is greater than its right road edge distance value, the right road edge distance value is compared with the preset distance threshold to obtain a comparison result. Then another detected road edge point that has not yet been compared with the preset distance threshold is selected from the detected road edge point set, and the smaller of its left and right road edge distance values is compared with the preset distance threshold to obtain its comparison result.
Step S203: determining, based on the distance comparison results, whether a new road edge point set exists in the grouping result of the detected road edge point set.
After the comparison result of each detected road edge point against the preset distance threshold is obtained, the detected road edge points can be grouped. Specifically, when the compared distance value is smaller than or equal to the preset distance threshold, the detected road edge point is assigned to a road edge of the original road; it can further be determined whether it is a left or a right road edge point: when it was the right road edge distance value that was compared with the preset distance threshold, the detected road edge point is a point on the right road edge, and when it was the left road edge distance value that was compared with the preset distance threshold, the detected road edge point is a point on the left road edge. When the compared distance value is greater than the preset distance threshold, the detected road edge point is marked as a to-be-determined road edge point; when there are enough to-be-determined road edge points to form a road edge, the existence of a new road edge point set can be determined.
Specifically, the step of determining, based on the distance comparison results, whether a new road edge point set exists in the grouping result of the detected road edge point set (step S203) includes:
if the number of to-be-determined road edge points whose distance value is greater than the preset distance threshold is greater than a preset number threshold, comparing the minimum of the ordinate values of all to-be-determined road edge points in the vehicle coordinate system with a preset ordinate threshold; if this minimum is smaller than the preset ordinate threshold, determining that a new road edge point set exists in the grouping result of the detected road edge point set, and otherwise determining that no new road edge point set exists in the grouping result of the detected road edge point set.
The preset number threshold may be set according to the actual situation and is not limited herein. In this embodiment, the vehicle coordinate system may be a coordinate system established with the vehicle as the origin; specifically, the direction in which the vehicle advances may be defined as the positive x-axis direction, and the left side of the vehicle as the positive y-axis direction. When the number of to-be-determined road edge points whose distance value is greater than the preset distance threshold exceeds the preset number threshold, the coordinate values of all to-be-determined road edge points in the vehicle coordinate system are recorded. When the minimum of their ordinate values is smaller than the preset ordinate threshold, all to-be-determined road edge points are defined as a new road edge point set, which may serve as the left or the right road edge point set of the new road; the road edge on the other side of the vehicle in the original road edge point set is then taken as the remaining side of the new road. For example, when the new road edge is located on the left side of the vehicle, the new road edge point set is taken as the left road edge of the new road, the right road edge point set of the original road is taken as the right road edge point set of the new road, and the new road is finally formed by fitting the new road edge point set together with the right road edge point set of the original road. When the minimum of the ordinate values of the to-be-determined road edge points is not smaller than the preset ordinate threshold, it is determined that no new road edge point set exists.
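The grouping and new-edge decision of steps S201 to S203 can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the threshold values, the coordinate convention (x forward, y to the left) and the point sets are all assumptions.

```python
import math

def group_detected_points(detected, left_pred, right_pred,
                          dist_thresh, count_thresh, y_thresh):
    """Sketch of steps S201-S203: assign each detected road edge point to the
    left edge, the right edge, or a pending set, then check whether the
    pending points form a new road edge."""
    def avg_dist(pt, pred):
        # average Euclidean distance to a predicted road edge point set
        return sum(math.dist(pt, p) for p in pred) / len(pred)

    left_set, right_set, pending = [], [], []
    for pt in detected:
        dl, dr = avg_dist(pt, left_pred), avg_dist(pt, right_pred)
        if min(dl, dr) <= dist_thresh:           # belongs to an existing edge
            (left_set if dl <= dr else right_set).append(pt)
        else:                                    # candidate for a new edge
            pending.append(pt)

    # A new road edge point set exists only when there are enough pending
    # points and the smallest ordinate among them is below the preset bound.
    has_new_edge = (len(pending) > count_thresh and
                    min((p[1] for p in pending), default=float("inf")) < y_thresh)
    return left_set, right_set, pending, has_new_edge
```

With one detected point near each original edge and a cluster of six points well outside both predicted sets, the six outliers end up in the pending set and trigger the new-edge determination.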
In the above embodiment, the step of comparing the minimum of the ordinate values of all to-be-determined road edge points in the vehicle coordinate system with the preset ordinate threshold is performed on the basis that the vehicle coordinate system takes the vehicle as the origin, the advancing direction of the vehicle as the positive x-axis direction, and the left side of the vehicle as the positive y-axis direction. In other embodiments, when the vehicle coordinate system is defined differently, this step changes accordingly. For example, when the coordinate system is established with the vehicle as the origin, the advancing direction of the vehicle defined as the positive y-axis direction and the left side of the vehicle as the positive x-axis direction, the step may be changed to: comparing the minimum of the abscissa values of all to-be-determined road edge points in the vehicle coordinate system with a preset abscissa threshold; if the minimum of the abscissa values is smaller than the preset abscissa threshold, determining that a new road edge point set exists in the grouping result of the detected road edge point set, and otherwise determining that no new road edge point set exists.
The above embodiments mainly describe how the grouping of the detected road edge point set can be determined according to the predicted road edge point set, so how the predicted road edge point set is determined is important. Referring to fig. 3, fig. 3 is a schematic flow chart of an embodiment of obtaining the predicted road edge point set at the current time, which specifically includes the following steps S301 to S302.
Step S301: the lateral displacement, the longitudinal displacement, and the rotation angle of the vehicle from the previous time to the current time are determined based on the vehicle motion information of the previous time.
The previous time and the current time are two consecutive times, which may be spaced, for example, in units of seconds: if the previous time is 19:30:29, the current time may be 19:30:30. Of course, the specific interval between the previous time and the current time may also take other values, which is not limited herein. The vehicle motion information may include the motion speed of the vehicle, the rotation angle information of the vehicle, and the like, so that the lateral displacement and the longitudinal displacement of the vehicle from the previous time to the current time may be calculated according to the following formulas:
Δx = v · T(k-1, k) · cos θ

Δy = v · T(k-1, k) · sin θ
where Δx and Δy respectively denote the longitudinal displacement and the lateral displacement of the vehicle from time k-1 to time k (on the premise that the front of the vehicle is the positive x-axis direction and the left of the vehicle is the positive y-axis direction), v denotes the vehicle speed at time k-1, T(k-1, k) denotes the time elapsed from time k-1 to time k, and θ denotes the rotation angle of the vehicle about the vertical direction.
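Under the stated convention (x forward, y to the left), one consistent reading of these displacement formulas is the dead-reckoning form Δx = v·T·cos θ, Δy = v·T·sin θ, sketched below in Python; the numeric inputs are illustrative:

```python
import math

def displacement(v, dt, theta):
    """Longitudinal (dx) and lateral (dy) displacement of the vehicle from
    time k-1 to time k, given the speed v at time k-1, the elapsed time
    dt = T(k-1, k) and the rotation angle theta about the vertical direction
    (x forward, y to the left)."""
    dx = v * dt * math.cos(theta)
    dy = v * dt * math.sin(theta)
    return dx, dy

# Driving straight ahead at 10 m/s for 1 s: purely longitudinal motion.
dx, dy = displacement(v=10.0, dt=1.0, theta=0.0)
```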
Step S302: determining the predicted road edge point set at the current time based on the lateral displacement, the longitudinal displacement, the rotation angle, the translation transformation matrix of the vehicle coordinate system, the rotation transformation matrix of the vehicle coordinate system, and the coordinate values of the fused road edge points at the previous time in the vehicle coordinate system.
Before this step is executed, the translation transformation matrix and the rotation transformation matrix of the vehicle coordinate system may be acquired, and the coordinate values of the fused road edge points at the previous time in the vehicle coordinate system may be obtained from the fused road edge point set calculated in the manner of any of the above embodiments. The coordinate values of the fused road edge points at the previous time are then combined with the rotation angle, the lateral displacement and the longitudinal displacement obtained in the previous step to compute the predicted road edge point set at the current time. Specifically, the predicted road edge point set at the current time may be calculated according to the following formulas:
[x_k, y_k]^T = A · ([x_{k-1}, y_{k-1}]^T + B)

A = [cos θ, sin θ; -sin θ, cos θ]

B = [-Δx, -Δy]^T
where x_k and y_k respectively represent the abscissa and the ordinate of a road edge point at time k in the vehicle coordinate system; A represents the rotation transformation matrix of the vehicle coordinate system; B represents the translation transformation matrix of the vehicle coordinate system; Δx and Δy respectively represent the longitudinal displacement and the lateral displacement of the vehicle from time k-1 to time k; and θ represents the rotation angle of the vehicle about the vertical direction.
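A minimal Python sketch of this frame-to-frame prediction, assuming the transform first removes the vehicle translation and then the rotation (one consistent reading of the rotation matrix A and translation matrix B described above, not the patent's exact equations):

```python
import math

def predict_points(points, dx, dy, theta):
    """Map fused road edge points from the previous vehicle frame (time k-1)
    into the current vehicle frame (time k): subtract the vehicle displacement,
    then rotate by the heading change (x forward, y to the left)."""
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    predicted = []
    for x, y in points:
        tx, ty = x - dx, y - dy                       # remove the translation
        predicted.append((cos_t * tx + sin_t * ty,    # remove the rotation
                          -sin_t * tx + cos_t * ty))
    return predicted

# Vehicle advanced 2 m without turning: a point 10 m ahead is now 8 m ahead.
pred = predict_points([(10.0, 0.0)], dx=2.0, dy=0.0, theta=0.0)
```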
In one embodiment, the step of performing curve fitting on the fused road edge point set to obtain the road edge fitting result at the current time (step S105) includes: selecting a plurality of fitting road edge points from the fused road edge point set; performing curve fitting on the plurality of fitting road edge points to obtain a fitted curve model; matching the other fused road edge points against the fitted curve model; if the number of fused road edge points whose matching degree is within a preset range is greater than or equal to a preset number threshold, taking the fitted curve model as the road edge fitting result; and if the number of fused road edge points whose matching degree is within the preset range is smaller than the preset number threshold, returning to the step of selecting a plurality of fitting road edge points from the fused road edge point set.
The fused road edge point set may include a fused left road edge point set and a fused right road edge point set, and curve fitting may be performed on each of them in turn. Taking the fused left road edge point set as an example, a plurality of road edge points to be curve-fitted are selected from it, and then the other fused left road edge points are matched against the fitted curve model to verify whether the model meets the requirements. Specifically, when the matching degrees of most of the other fused left road edge points and the fitted curve model fall within the preset range, the fitted curve model can be considered satisfactory, and it only needs to be output as the road edge fitting result; when only few of them fall within the preset range, the fitted curve model can be considered unsatisfactory, a plurality of road edge points can be reselected from the fused left road edge point set to fit a new curve model, and these steps can be repeated until a fitted curve model that meets the requirements is obtained.
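The select-fit-verify-reselect loop described above is essentially a RANSAC-style procedure. Below is a minimal sketch, simplified to a straight-line model with illustrative thresholds; the patent's actual curve model and threshold values are not specified here:

```python
import random

def fit_edge_ransac(points, tol=0.3, inlier_thresh=None, max_iters=100, seed=0):
    """RANSAC-style fit over a fused road edge point set, simplified to a
    straight-line model y = a*x + b: repeatedly sample two fitting points,
    fit, and accept the model once enough other points lie within tol."""
    rng = random.Random(seed)
    pts = list(points)
    if inlier_thresh is None:
        inlier_thresh = len(pts) // 2            # preset number threshold
    for _ in range(max_iters):
        (x1, y1), (x2, y2) = rng.sample(pts, 2)  # select fitting points
        if x1 == x2:
            continue                             # vertical pair, resample
        a = (y2 - y1) / (x2 - x1)                # fitted curve model
        b = y1 - a * x1
        # match the other fused road edge points against the model
        inliers = [p for p in pts if abs(a * p[0] + b - p[1]) <= tol]
        if len(inliers) >= inlier_thresh:        # model meets the requirement
            return a, b
    return None                                  # no satisfactory model found

# 20 points on a straight road edge plus two gross outliers.
pts = [(float(x), 0.5 * x + 1.8) for x in range(20)] + [(5.0, 40.0), (7.0, -30.0)]
model = fit_edge_ransac(pts)
```

Because a line through either outlier cannot gather enough inliers, the first accepted model comes from two points on the true edge, so the outliers do not corrupt the fit.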
According to the above embodiments, the road edge recognition result and the grouping result of the detected road edge point set are consulted at the same time, the predicted road edge point set and the detected road edge point set are fused accordingly, and curve fitting is then performed on the fused road edge points. The predicted and detected road edge point sets can thus be fused according to whether the recognized detected road edge points have changed and whether the road edge is occluded, so that road edges under complex road conditions can be tracked stably and accurately, with good robustness.
The road edge tracking method in the above embodiments may be applied to a road edge fitting device, and the road edge fitting device may be a server, a mobile device, or a system in which a server and a mobile device cooperate with each other. Accordingly, the parts included in the device, such as the units, sub-units, modules, and sub-modules, may all be disposed in the server, may all be disposed in the mobile device, or may be disposed in the server and the mobile device respectively.
Further, the server may be hardware or software. When the server is hardware, the server may be implemented as a distributed server cluster formed by a plurality of servers, or may be implemented as a single server. When the server is software, it may be implemented as a plurality of software or software modules, for example, software or software modules for providing a distributed server, or may be implemented as a single software or software module, which is not specifically limited herein.
In order to implement the road edge tracking method in the above embodiments, the present application provides a road edge fitting device. Referring to fig. 4, fig. 4 is a schematic structural diagram of an embodiment of the road edge fitting device provided in the present application.
Specifically, the road edge fitting device 40 may include an acquisition module 41, a grouping module 42, a fusion module 43, and a fitting module 44.
The obtaining module 41 is configured to obtain the predicted road edge point set and the detected road edge point set at the current time.

The grouping module 42 is configured to perform grouping processing on the detected road edge point set based on the predicted road edge point set, so as to obtain a grouping result of the detected road edge point set.

The obtaining module 41 is further configured to obtain a road edge recognition result indicating whether the road on which the vehicle travels at the current time has an incomplete road edge.

The fusion module 43 is configured to perform fusion processing on the predicted road edge point set and the detected road edge point set based on the road edge recognition result and the grouping result of the detected road edge point set, so as to obtain a fused road edge point set.

The fitting module 44 is configured to perform curve fitting on the fused road edge point set, so as to obtain the road edge fitting result at the current time.
In one embodiment of the present application, the modules in the road edge fitting device 40 shown in fig. 4 may be combined, separately or all together, into one or several units, or some unit(s) thereof may be further split into a plurality of functionally smaller sub-units, which can implement the same operation without affecting the technical effects of the embodiments of the present application. The above modules are divided based on logical functions; in practical applications, the function of one module may be implemented by a plurality of units, or the functions of a plurality of modules may be implemented by one unit. In other embodiments of the present application, the road edge fitting device 40 may also include other units; in practical applications, these functions may likewise be implemented with the assistance of other units and through the cooperation of a plurality of units.
The above road edge tracking method may be applied to a road edge fitting apparatus. Referring specifically to fig. 5, fig. 5 is a schematic structural diagram of an embodiment of the road edge fitting apparatus provided in the present application; the road edge fitting apparatus 50 includes a processor 51 and a memory 52. The memory 52 stores a computer program, and the processor 51 is configured to execute the computer program to implement the above road edge tracking method.
The processor 51 may be an integrated circuit chip, and has signal processing capability. Processor 51 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
For the above embodiments of the road edge tracking method, which may be presented in the form of a computer program, the present application provides a computer storage medium carrying the computer program. Referring to fig. 6, fig. 6 is a schematic structural diagram of an embodiment of the computer storage medium provided in the present application; the computer storage medium 60 of this embodiment includes a computer program 61 that can be executed to implement the above road edge tracking method.
The computer storage medium 60 of this embodiment may be a medium that can store program instructions, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc, or may be a server that stores the program instructions, and the server may send the stored program instructions to other devices for execution or may execute the stored program instructions itself.
In addition, if the above functions are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a storage medium readable by a mobile terminal. That is, the present application also provides a storage device storing program data that can be executed to implement the methods of the above embodiments; the storage device may be, for example, a USB flash drive, an optical disc, or a server. In other words, the present application may be embodied as a software product that includes instructions for causing a smart terminal to perform all or part of the steps of the methods described in the various embodiments.
In the description of the present application, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "plurality" is at least two, such as two, three, etc., unless explicitly defined otherwise.
Any process or method description in the flow charts or otherwise described herein may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes further implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functions involved, as would be understood by those reasonably skilled in the art to which the embodiments of the present application pertain.
The logic and/or steps represented in the flowcharts or otherwise described herein may, for example, be considered an ordered listing of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device (such as a personal computer, a server, a network device, or another system that can fetch instructions from the instruction execution system, apparatus, or device and execute them). For the purposes of this description, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium could even be paper or another suitable medium on which the program is printed, as the program can be electronically captured, for example by optical scanning of the paper or other medium, and then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and stored in a computer memory.
The foregoing description is only of embodiments of the present application, and is not intended to limit the scope of the patent application, and all equivalent structures or equivalent processes using the descriptions and the contents of the present application or other related technical fields are included in the scope of the patent application.

Claims (14)

1. A road edge tracking method, characterized by comprising the following steps:

acquiring a predicted road edge point set and a detected road edge point set at the current moment;

grouping the detected road edge point set based on the predicted road edge point set to obtain a grouping result of the detected road edge point set;

acquiring a road edge recognition result indicating whether the road on which the vehicle travels at the current moment has an incomplete road edge;

performing fusion processing on the predicted road edge point set and the detected road edge point set based on the road edge recognition result and the grouping result of the detected road edge point set, to obtain a fused road edge point set; and

performing curve fitting on the fused road edge point set to obtain a road edge fitting result at the current moment.
2. The road edge tracking method according to claim 1, wherein the performing fusion processing on the predicted road edge point set and the detected road edge point set based on the road edge recognition result and the grouping result of the detected road edge point set comprises:

determining respective fusion weights of the predicted road edge point set and the detected road edge point set based on the road edge recognition result and the grouping result of the detected road edge point set; and

performing fusion processing on the predicted road edge point set and the detected road edge point set according to the respective fusion weights.
3. The road edge tracking method according to claim 2, wherein the determining respective fusion weights of the predicted road edge point set and the detected road edge point set based on the road edge recognition result and the grouping result of the detected road edge point set comprises:

if no new road edge point set exists in the grouping result of the detected road edge point set and the road edge recognition result indicates no incomplete road edge, determining that the difference between the fusion weight of the predicted road edge point set and the fusion weight of the detected road edge point set is smaller than or equal to a preset threshold.
4. The road edge tracking method according to claim 2, wherein the determining respective fusion weights of the predicted road edge point set and the detected road edge point set based on the road edge recognition result and the grouping result of the detected road edge point set comprises:

if no new road edge point set exists in the grouping result of the detected road edge point set and the road edge recognition result indicates an incomplete road edge, determining that the fusion weight of the predicted road edge point set is greater than that of the detected road edge point set.
5. The road edge tracking method according to claim 2, wherein the determining respective fusion weights of the predicted road edge point set and the detected road edge point set based on the road edge recognition result and the grouping result of the detected road edge point set comprises:

if a new road edge point set exists in the grouping result of the detected road edge point set and the road edge recognition result indicates no incomplete road edge, determining that the fusion weight of the predicted road edge point set is smaller than that of the detected road edge point set.
6. The road edge tracking method according to claim 2, wherein the determining respective fusion weights of the predicted road edge point set and the detected road edge point set based on the road edge recognition result and the grouping result of the detected road edge point set comprises:

if a new road edge point set exists in the grouping result of the detected road edge point set and the road edge recognition result indicates an incomplete road edge, determining that the fusion weight of the predicted road edge point set is smaller than that of the detected road edge point set.
7. The road edge tracking method according to any one of claims 1 to 6, wherein the grouping the detected road edge point set based on the predicted road edge point set to obtain a grouping result of the detected road edge point set comprises:

selecting, from the detected road edge point set, a target road edge point for which no distance value has been calculated, calculating the distance between the target road edge point and the left road edge predicted point set in the predicted road edge point set to obtain a left road edge distance value, calculating the distance between the target road edge point and the right road edge predicted point set in the predicted road edge point set to obtain a right road edge distance value, and repeating the current step until all detected road edge points have been traversed;

comparing the smaller of the left road edge distance value and the right road edge distance value of each detected road edge point with a preset distance threshold; and

determining, based on the distance comparison results, whether a new road edge point set exists in the grouping result of the detected road edge point set.
8. The road edge tracking method according to claim 7, wherein the determining, based on the distance comparison results, whether a new road edge point set exists in the grouping result of the detected road edge point set comprises:
if the number of the to-be-determined road edge points, of which the distance value is larger than the preset distance threshold, in the detected road edge points is larger than the preset number threshold, comparing the minimum value of the ordinate values of all the to-be-determined road edge points under the vehicle coordinate system with the preset ordinate threshold;
if the minimum value of the ordinate values of all the to-be-determined route edge points is smaller than a preset ordinate threshold value, determining that a new route edge point set exists in the grouping result of the detection route edge point set, otherwise, determining that no new route edge point set exists in the grouping result of the detection route edge point set.
9. The method according to claim 7, wherein selecting a target road edge point for which no distance value has been calculated from the detected road edge point set, calculating its distance to the left road edge prediction point set within the predicted road edge point set to obtain a left road edge distance value, and calculating its distance to the right road edge prediction point set within the predicted road edge point set to obtain a right road edge distance value, comprises:
calculating distance values between the selected target road edge point and every left road edge prediction point in the left road edge prediction point set, and taking the average of those distance values as the left road edge distance value; and
calculating distance values between the selected target road edge point and every right road edge prediction point in the right road edge prediction point set, and taking the average of those distance values as the right road edge distance value.
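The averaging of claim 9 is a one-liner with NumPy broadcasting (the function name is illustrative):

```python
import numpy as np

def mean_distance(point, pred_set):
    # Average Euclidean distance from one detected road edge point to
    # every prediction point in the (left or right) predicted set.
    diffs = np.asarray(pred_set) - np.asarray(point)
    return float(np.mean(np.linalg.norm(diffs, axis=1)))
```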
10. The road edge tracking method according to any one of claims 1 to 6, wherein obtaining the predicted road edge point set at the current moment comprises:
determining the lateral displacement, longitudinal displacement and rotation angle of the vehicle from the previous moment to the current moment based on the vehicle motion information at the previous moment; and
determining the predicted road edge point set at the current moment based on the lateral displacement, the longitudinal displacement, the rotation angle, a translation transformation matrix of the vehicle coordinate system, a rotation transformation matrix of the vehicle coordinate system, and the coordinate values of the fused road edge points of the previous moment in the vehicle coordinate system.
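The prediction of claim 10 is a 2D rigid transform of the previous fused points into the current vehicle frame. A sketch under assumed sign conventions (x lateral, y longitudinal, positive rotation counterclockwise); the patent itself only names the translation and rotation transformation matrices:

```python
import numpy as np

def predict_edge_points(prev_points, dx, dy, dtheta):
    """Transform fused road edge points (N x 2) from the previous vehicle
    frame into the current one, given the ego vehicle's lateral
    displacement dx, longitudinal displacement dy, and rotation dtheta
    (radians) between the two moments. A world-fixed point first loses the
    ego translation, then is rotated by -dtheta to undo the heading change."""
    c, s = np.cos(dtheta), np.sin(dtheta)
    rot = np.array([[c, s],      # rotation transformation matrix (-dtheta)
                    [-s, c]])
    translated = prev_points - np.array([dx, dy])  # translation transform
    return translated @ rot.T                      # rotation transform
```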
11. The road edge tracking method according to any one of claims 1 to 6, wherein performing curve fitting on the fused road edge point set to obtain the road edge fitting result at the current moment comprises:
selecting a plurality of fitting road edge points from the fused road edge point set;
performing curve fitting on the plurality of fitting road edge points to obtain a fitted curve model;
matching the remaining fused road edge points against the fitted curve model;
if the number of fused road edge points whose matching degree falls within a preset range is greater than or equal to a preset number threshold, taking the fitted curve model as the road edge fitting result; and
if that number is smaller than the preset number threshold, returning to the step of selecting a plurality of fitting road edge points from the fused road edge point set.
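The select-fit-match loop of claim 11 resembles a RANSAC-style procedure. A minimal sketch assuming a quadratic model x = f(y) and a residual test as the matching degree; the sample size, thresholds, and names are all illustrative choices, not the patent's:

```python
import numpy as np

def fit_edge_ransac(points, n_sample=4, resid_thresh=0.3,
                    inlier_thresh=10, max_iters=50, rng=None):
    """RANSAC-style curve fit: repeatedly sample a few fused road edge
    points (N x 2 array of [x, y]), fit a quadratic x = f(y), and accept
    the model once enough points match it within the residual threshold."""
    rng = np.random.default_rng(rng)
    xs, ys = points[:, 0], points[:, 1]
    for _ in range(max_iters):
        idx = rng.choice(len(points), size=n_sample, replace=False)
        coeffs = np.polyfit(ys[idx], xs[idx], deg=2)  # fitted curve model
        resid = np.abs(np.polyval(coeffs, ys) - xs)   # match other points
        if np.count_nonzero(resid < resid_thresh) >= inlier_thresh:
            return coeffs                             # accepted fit
    return None  # no model reached the preset inlier count
```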
12. A road edge fitting device, comprising an acquisition module, a grouping module, a fusion module and a fitting module, wherein:
the acquisition module is configured to acquire a predicted road edge point set and a detected road edge point set at the current moment;
the grouping module is configured to group the detected road edge point set based on the predicted road edge point set to obtain a grouping result of the detected road edge point set;
the acquisition module is further configured to acquire a road edge identification result indicating whether the road on which the vehicle is travelling at the current moment has an incomplete road edge;
the fusion module is configured to fuse the predicted road edge point set and the detected road edge point set based on the road edge identification result and the grouping result of the detected road edge point set to obtain a fused road edge point set; and
the fitting module is configured to perform curve fitting on the fused road edge point set to obtain the road edge fitting result at the current moment.
13. A road edge fitting apparatus, comprising a processor and a memory, wherein the memory stores a computer program, and the processor is configured to execute the computer program to implement the method of any one of claims 1 to 11.
14. A computer-readable storage medium having program instructions stored thereon, which, when executed by a processor, implement the method of any one of claims 1 to 11.
CN202211637326.6A 2022-12-19 2022-12-19 Road edge tracking method, device, equipment and storage medium Pending CN116091556A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211637326.6A CN116091556A (en) 2022-12-19 2022-12-19 Road edge tracking method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116091556A true CN116091556A (en) 2023-05-09

Family

ID=86187747

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211637326.6A Pending CN116091556A (en) 2022-12-19 2022-12-19 Road edge tracking method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116091556A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination