WO2023093124A1 - Lane line tracking method and apparatus, and computer device, storage medium and computer program product - Google Patents

Lane line tracking method and apparatus, and computer device, storage medium and computer program product

Info

Publication number
WO2023093124A1
Authority
WO
WIPO (PCT)
Prior art keywords
lane line
image
lane
previous frame
recognized
Prior art date
Application number
PCT/CN2022/110308
Other languages
French (fr)
Chinese (zh)
Inventor
史佳
李晨光
程光亮
石建萍
Original Assignee
上海商汤智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司
Publication of WO2023093124A1 publication Critical patent/WO2023093124A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G06N 3/08 - Learning methods
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/20 - Analysis of motion
    • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G06T 2207/20084 - Artificial neural networks [ANN]
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30248 - Vehicle exterior or interior
    • G06T 2207/30252 - Vehicle exterior; Vicinity of vehicle
    • G06T 2207/30256 - Lane; Road marking

Definitions

  • The present disclosure relates to, but is not limited to, the technical field of automatic driving, and in particular to a lane line tracking method and apparatus, a computer device, a storage medium, and a computer program product.
  • Lane line detection and tracking is a necessary process in automatic driving.
  • the accuracy of detection and tracking results is related to the safety of automatic driving.
  • The lane line detection and tracking technology in the related art not only involves a complicated detection and tracking process, but the accuracy of the final tracking results is also not high.
  • Embodiments of the present disclosure at least provide a lane line tracking method, device, computer equipment, storage medium, and computer program product.
  • An embodiment of the present disclosure provides a lane line tracking method, including:
  • An embodiment of the present disclosure also provides a lane line tracking device, including:
  • the acquiring part is configured to acquire a previous frame image of the image to be recognized, and a first lane line obtained by performing image recognition on the previous frame image;
  • the first determining part is configured to determine, based on the previous frame image and the image to be recognized, a second lane line in the image to be recognized and offset information of the second lane line relative to the closest first lane line;
  • the second determining part is configured to determine, based on the offset information corresponding to each of the second lane lines and each of the first lane lines, a first lane line matching at least one of the second lane lines, so as to obtain a tracking result of the at least one second lane line.
  • An embodiment of the present disclosure also provides a computer device, including a processor and a memory, where the memory stores machine-readable instructions executable by the processor, and the processor is configured to execute the machine-readable instructions stored in the memory; when the machine-readable instructions are executed by the processor, the steps in the above first aspect, or in any possible implementation manner of the first aspect, are executed.
  • An embodiment of the present disclosure also provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a computer device, the steps in the above-mentioned first aspect, or in any possible implementation manner of the first aspect, are executed.
  • An embodiment of the present disclosure also provides a computer program product, which includes a non-transitory computer-readable storage medium storing a computer program; when the computer program is read and executed by a computer, some or all of the steps of the above-mentioned method are implemented.
  • In the process of detecting and tracking the second lane line in the image to be recognized in combination with the previous frame image, both the determination of the second lane line and the determination of the offset information of the second lane line relative to the closest first lane line can make use of the image features of the previous frame image, which is beneficial to improving the accuracy of the determined second lane line and of the offset information corresponding to the second lane line; then, tracking the second lane line based on this more accurate offset information can improve the accuracy of the determined tracking result.
  • FIG. 1 is a schematic diagram of an implementation flow of a lane line tracking method provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of a recognized first lane line and a second lane line provided by an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of identifying a second lane line by performing image recognition on an image to be recognized by a reasoning module in a target neural network provided by an embodiment of the present disclosure
  • FIG. 4 is a schematic diagram of an implementation process for image recognition of an image to be recognized by a reasoning module in a target neural network provided by an embodiment of the present disclosure
  • FIG. 5 is a schematic diagram of an implementation process for determining a tracking result corresponding to a second lane line by a tracking module in a target neural network provided by an embodiment of the present disclosure
  • FIG. 6 is a schematic diagram of output prediction offset information provided by an embodiment of the present disclosure.
  • FIG. 7 is a schematic flow diagram of training a target neural network provided by an embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram of the composition and structure of a lane line tracking device provided by an embodiment of the present disclosure.
  • FIG. 9 is a schematic structural diagram of a computer device provided by an embodiment of the present disclosure.
  • Lane line detection and tracking is a necessary process in automatic driving, and the accuracy of the detection and tracking results is related to the safety of automatic driving.
  • Most lane line detection and tracking technologies in the related art first use a neural network to detect the lane lines, and then use lengthy traditional algorithms (such as Kalman filtering and the Hungarian algorithm) to post-process the detected lane lines, so as to realize tracking of the lane lines.
  • embodiments of the present disclosure provide a lane line tracking method, device, computer equipment, storage medium, and computer program product.
  • Detecting and tracking the second lane line in the image to be recognized in combination with the previous frame image makes it possible to draw on the image features of the previous frame image both when determining the second lane line and when determining the offset information of the second lane line relative to the nearest first lane line, which is beneficial to improving the accuracy of the determined second lane line and of the offset information corresponding to the second lane line; then, tracking the second lane line based on the more accurate offset information can improve the accuracy of the determined tracking result.
  • A lane line tracking method disclosed in the embodiments of the present disclosure is first introduced in detail.
  • The execution subject of the lane line tracking method provided in the embodiments of the present disclosure is generally a computer device with certain computing power; in some possible implementation manners, the lane line tracking method may be implemented by a processor invoking computer-readable instructions stored in a memory.
  • The lane line tracking method provided by the embodiments of the present disclosure is described below by taking the target neural network as the execution subject as an example.
  • Fig. 1 is a schematic diagram of the implementation flow of a lane line tracking method provided by an embodiment of the present disclosure. As shown in Fig. 1, the method may include the following steps S101 to S103:
  • S101 Acquire a previous frame image of the image to be recognized, and a first lane line obtained by performing image recognition on the previous frame image.
  • The image to be recognized may be an image including lane lines captured by a camera device installed on the target vehicle, and the previous frame image may be captured by the camera device or generated based on the image to be recognized, which is not limited here.
  • the image to be recognized may be input into the target neural network, and the target neural network is used to perform image recognition on the previous frame image to obtain each first lane line in the previous frame image.
  • S102 Based on the previous frame image and the image to be recognized, determine the second lane line in the image to be recognized, and the offset information of the second lane line relative to the closest first lane line.
  • FIG. 2 is a schematic diagram of identified first lane lines and second lane lines provided by an embodiment of the present disclosure, wherein the first lane lines include a first lane line 11, a first lane line 12, a first lane line 13 and a first lane line 14, and the second lane lines include a second lane line 21, a second lane line 22, a second lane line 23 and a second lane line 24.
  • the offset information is used to characterize the offset between the second lane line and the nearest first lane line.
  • The offset information may be positional offset information between the pixel point corresponding to each second lane point in the second lane line and the pixel point corresponding to each first lane point in the nearest first lane line.
  • A lane point can be a key point on the lane line.
  • The offset information can be the offset of each second lane line in the width direction of the image.
  • When the target neural network predicts the lane points corresponding to each lane line, the coordinate of each pixel in the image height direction can be fixed, and only the offset information of the pixel in the image width direction needs to be determined.
  • In this way, when the target neural network performs subsequent lane line matching, it only needs to calculate offset information in one direction (the image width) to adjust the position of the lane line and determine the lane line in the previous frame image that matches it, which reduces the amount of calculation and is beneficial to improving the speed and efficiency of lane line tracking.
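  • As an illustration of this fixed-height representation, the following minimal Python sketch (variable names and sample values are assumptions of this sketch, not taken from the disclosure) reduces the offset between two lane lines to a per-row difference in the width direction:

```python
import numpy as np

def width_offsets(curr_lane_x: np.ndarray, prev_lane_x: np.ndarray) -> np.ndarray:
    """Per-row offset in the image width (x) direction between a lane line in the
    current frame and a lane line in the previous frame, with the image height
    (row) coordinate held fixed for both lines."""
    assert curr_lane_x.shape == prev_lane_x.shape, "lines must be sampled at the same rows"
    return curr_lane_x - prev_lane_x  # one scalar offset per fixed image row

# Example: lane points sampled at the same fixed image rows in both frames.
prev_x = np.array([310.0, 322.0, 335.0, 349.0])   # previous-frame lane line
curr_x = np.array([312.5, 324.0, 337.5, 351.0])   # current-frame lane line
print(width_offsets(curr_x, prev_x))              # [2.5 2.  2.5 2. ]
```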
  • The target neural network can use convolutional layers to convolve the previous frame image and the image to be recognized respectively to obtain the feature maps corresponding to the two images, and combine the two feature maps to obtain a target feature map corresponding to the image to be recognized.
  • The feature map obtained by convolving each image may include a heat feature map, a depth feature map, and the like. Afterwards, based on the image features included in the obtained target feature map, the image features corresponding to each second lane line can be determined, and then each second lane line in the image to be recognized can be determined.
  • The target neural network can also determine the first lane lines in the previous frame image according to the image features included in the target feature map; according to the determined second lane lines and first lane lines, the first lane line closest to each second lane line can be determined, and further, the offset information of each second lane line relative to its closest first lane line can be obtained.
  • The offset information can be predicted by the offset branch in the target neural network.
  • S103 Based on the offset information corresponding to each second lane line and each first lane line, determine a first lane line matching at least one second lane line, and obtain a tracking result of at least one second lane line.
  • The tracking result is a result representing whether there is a lane line matching the second lane line among the first lane lines, that is, it can represent whether a second lane line in the image to be recognized is the same lane line as a first lane line.
  • the lane line identification information may include a lane line number.
  • If a match exists, the second lane line can be regarded as the same lane line as the first lane line, and the lane line number of the first lane line can be used as the lane line number of the second lane line; otherwise, a new lane line number can be generated for the second lane line.
  • The offset information of the second lane line can be used to adjust the position of the second lane line to obtain each adjusted second lane line; then, the adjusted second lane line is matched with the first lane lines, and it is determined whether there is a first lane line matching the second lane line.
  • If there is a match, the lane line identification information corresponding to the first lane line matching the second lane line can be used as the lane line identification information of the second lane line, and this lane line identification information can be used as the tracking result of the second lane line.
  • For example, suppose the first lane lines include three first lane lines numbered 1, 2, and 3 respectively.
  • If a second lane line matches the first lane line numbered 2, lane line number 2 can be used as the lane line number of the second lane line.
  • The second lane line may also be used to replace the first lane line matched with it that is stored in the local database.
  • The local database stores each identified lane line, the image corresponding to the lane line, the image features, and the lane line identification information.
  • All information related to the lane lines can be stored in the local database.
  • When replacing, the second lane line may be used to replace the matched first lane line as a whole, or each second lane point corresponding to the second lane line may be used directly to replace the first lane points corresponding to the first lane line.
  • If there is no matching first lane line, the second lane line is a newly recognized lane line, and lane line identification information corresponding to this lane line is generated as the tracking result of the second lane line.
  • The second lane line can also be stored in the local database. Continuing the above example, when it is determined that there is no first lane line matching a certain second lane line, a new lane line number 4 may be generated for that second lane line.
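  • The identifier bookkeeping and local-database replacement described above can be sketched as follows; the dictionary-based database and the function name are illustrative assumptions, not part of the disclosure:

```python
lane_db = {}          # lane line number -> lane points from the latest matched frame
next_lane_id = 1

def assign_track_id(second_lane_points, matched_first_id=None):
    """If the second lane line matched a first lane line, reuse that lane line
    number and replace the stored lane line; otherwise issue a new number."""
    global next_lane_id
    if matched_first_id is not None:
        lane_db[matched_first_id] = second_lane_points   # replace the stored first lane line
        return matched_first_id                          # tracking result: existing number
    new_id = next_lane_id
    next_lane_id += 1
    lane_db[new_id] = second_lane_points                 # store the newly recognized lane line
    return new_id                                        # tracking result: new number

# Example: a lane matched to number 2 keeps that number; an unmatched lane gets a new one.
print(assign_track_id([(100, 10), (102, 20)], matched_first_id=2))  # -> 2
print(assign_track_id([(300, 10), (305, 20)]))                      # -> 1
```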
  • The above embodiment does not need to be combined with a multi-sensor model; detection and tracking can be realized using only the acquired image to be recognized, which not only reduces the complexity of detection and tracking but also improves the universality of the lane line tracking method.
  • In addition, the detection and tracking steps are performed in the same target neural network, which removes part of the post-processing from the tracking process, thereby reducing the impact of a sensor model's detection results on the tracking results and improving tracking accuracy.
  • S102 may be implemented according to the following steps:
  • The heat map feature information can reflect the heat value of each pixel in the image, and the heat map feature information can be expressed in the form of a heat map.
  • The first heat map feature information corresponds to the first heat map, in which the pixels corresponding to different lane lines have different heat values; for example, for a road image, the heat value of each lane point on a lane line is higher than the heat value of other points on the road.
  • the feature information of the first heat map corresponding to the previous frame image may be obtained by performing image recognition on the previous frame image by the target neural network.
  • After obtaining the previous frame image, the first heat map corresponding to the previous frame image, and the image to be recognized, the target neural network can perform convolution operations on each of them to obtain the corresponding convolution results; the convolution results can then be fused to obtain an initial feature map corresponding to the initial image features, and an encoder is used to perform feature encoding on the initial image features to obtain a first feature map corresponding to the image to be recognized, wherein the first feature map includes the first image feature corresponding to the image to be recognized, and the first image feature may be a feature vector.
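  • A hedged sketch of this convolution, fusion, and encoding step is given below; it is a plausible layout only, not the network disclosed here, and the channel counts and the single-convolution "encoder" are assumptions:

```python
import torch
import torch.nn as nn

class ReasoningBackbone(nn.Module):
    """Convolve the previous frame image, its first heat map, and the image to be
    recognized separately, fuse the three results into an initial feature map, and
    encode it into the first feature map of the image to be recognized."""
    def __init__(self, feat_ch: int = 32):
        super().__init__()
        self.conv_prev = nn.Conv2d(3, feat_ch, 3, padding=1)
        self.conv_curr = nn.Conv2d(3, feat_ch, 3, padding=1)
        self.conv_heat = nn.Conv2d(1, feat_ch, 3, padding=1)
        self.fuse = nn.Conv2d(3 * feat_ch, feat_ch, 1)          # fuse the convolution results
        self.encoder = nn.Sequential(nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU())

    def forward(self, prev_img, curr_img, prev_heatmap):
        initial = self.fuse(torch.cat([self.conv_prev(prev_img),
                                       self.conv_curr(curr_img),
                                       self.conv_heat(prev_heatmap)], dim=1))
        return self.encoder(initial)                            # first feature map

net = ReasoningBackbone()
first_feat = net(torch.rand(1, 3, 256, 512),    # previous frame image
                 torch.rand(1, 3, 256, 512),    # image to be recognized
                 torch.rand(1, 1, 256, 512))    # first heat map of the previous frame
print(first_feat.shape)   # torch.Size([1, 32, 256, 512])
```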
  • S102-2: Based on the first image feature, determine the second heat map feature information corresponding to the image to be recognized.
  • The target neural network may perform further feature processing on the determined first feature map to obtain the second heat map feature information. For example, a decoding operation may be performed on the first feature map to determine the heat value corresponding to each pixel in the first feature map, and thereby determine the second heat map feature information corresponding to the image to be recognized.
  • S102-3 Based on the second heat map feature information and the first image feature, determine a second lane line in the image to be recognized and offset information corresponding to the second lane line.
  • The heat value of each pixel point in the second heat map can first be determined based on the second heat map feature information, so as to determine the initial pixel points corresponding to the second lane points in the second lane lines; then, according to the determined position of each initial pixel point in the second heat map, feature clustering can be performed on the features corresponding to the second lane points, and the second lane line to which each second lane point belongs can be determined.
  • The feature embedding branch in the target neural network can be used to perform feature clustering on the features corresponding to the second lane points so as to determine each second lane line, wherein the feature embedding branch can be an embedding layer.
  • In this way, the obtained first image feature not only contains the image features of the image to be recognized, but also incorporates the previous frame image and its corresponding first heat map feature information, which improves the richness of the feature information contained in the first image feature; then, based on this feature-rich first image feature, accurate second heat map feature information can be obtained. Because the heat value of a lane line differs from the heat value of other areas of the image, the second lane line can be accurately determined based on the second heat map feature information, and the offset information corresponding to the second lane line can be accurately determined using the first image feature, which incorporates the previous frame image.
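  • A simplified sketch of decoding the second lane lines from the second heat map and the feature embedding branch is given below; the disclosure only states that lane points are selected by heat value and grouped by feature clustering, so the heat threshold and the use of DBSCAN as a stand-in clustering step are assumptions:

```python
import numpy as np
from sklearn.cluster import DBSCAN   # stand-in for the feature-embedding clustering step

def decode_lanes(heatmap: np.ndarray, embeddings: np.ndarray, heat_thresh: float = 0.5):
    """heatmap: (H, W) heat values; embeddings: (H, W, D) per-pixel embedding vectors.
    Returns the selected lane points and a label grouping each point into a lane line."""
    ys, xs = np.where(heatmap > heat_thresh)              # initial pixels = likely lane points
    if len(ys) == 0:
        return np.empty((0, 2), dtype=int), np.empty(0, dtype=int)
    feats = embeddings[ys, xs]                            # embedding feature of each lane point
    labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(feats)  # one cluster per lane line
    return np.stack([ys, xs], axis=1), labels

# Dummy usage with random maps.
hm = np.random.rand(64, 128)
emb = np.random.rand(64, 128, 4)
points, lane_labels = decode_lanes(hm, emb)
print(points.shape, np.unique(lane_labels))
```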
  • A maximum number of recognized lane lines can also be preset, and if the number of lane lines recognized in any frame of the image to be recognized is greater than this maximum number, abnormality prompt information can be generated. That is, during driving of the target vehicle, the number of lane lines on the road is limited; if the number of identified lane lines is too large, there may be a detection error, which would affect the safety of automatic driving. Therefore, by setting the maximum number of lane lines and generating abnormality prompt information, the safety of automatic driving can be improved.
  • The target neural network can also determine the first lane lines in the previous frame image by performing feature processing on the first image feature; then, based on the distances between the first lane lines and the second lane lines, the first lane line closest to each second lane line can be determined.
  • Accordingly, the offset information of each second lane line relative to its nearest first lane line can be determined.
  • The steps of determining the second lane line, the offset information corresponding to the second lane line, and the second heat map feature information may be completed by the reasoning module in the target neural network.
  • Figure 3 is a schematic diagram, provided by an embodiment of the present disclosure, of the reasoning module in the target neural network performing image recognition to determine the second lane line.
  • The reasoning module convolves the first heat map 31 corresponding to the previous frame image, the previous frame image 32, and the image to be recognized 33 respectively to obtain the convolution results corresponding to each image, and fuses the convolution results to obtain the initial feature map 34 corresponding to the initial image features.
  • Based on the second heat map feature information, the second heat map 37 corresponding to the image to be recognized can be determined, and based on the heat value of each pixel in the second heat map 37, the second lane lines 38 can be determined.
  • The reasoning module can also determine each first lane line in the previous frame image.
  • The target neural network may include a reasoning module and a tracking module, wherein the tracking module is used to determine the first lane line matching each second lane line in the image to be recognized, and the reasoning module is used to determine, for each acquired image to be recognized, the corresponding lane lines, the offset information of the lane lines, and the heat map feature information; after the above information corresponding to the image to be recognized is determined, it is input into the tracking module to complete the tracking of the second lane lines in the image to be recognized.
  • FIG. 4 is a schematic diagram of an implementation process of image recognition of an image to be recognized by the reasoning module in a target neural network provided by an embodiment of the present disclosure. The images 41 used for recognition include any frame captured by the automatic driving device (target vehicle). The Nth frame image to be recognized 42 is input into the target neural network 43 to obtain the image feature 44 corresponding to the Nth frame image to be recognized; from this, the heat map feature information 45 corresponding to the Nth frame image to be recognized and the offset information 46 of the lane lines in the Nth frame image to be recognized can be obtained, and the feature embedding branch 47 is used to perform feature clustering on the features of the initial pixel points corresponding to the lane points (that is, the image feature 44) to determine the lane lines 48 in the Nth frame image to be recognized that is currently being processed.
  • The heat map feature information 45 corresponding to the Nth frame image to be recognized is then input into the target neural network 43 together with the Nth frame image to be recognized, for performing image recognition on the (N+1)th frame image to be recognized.
  • When the previous frame image is the first frame image, the frame before it and the corresponding heat map feature information may not be used; image recognition can be performed directly on the previous frame image, and the second image feature corresponding to the previous frame image can be determined.
  • The second image feature can then be processed to determine the first heat map feature information corresponding to the previous frame image, and then, based on the first heat map feature information and the second image feature, the first lane lines in the previous frame image can be determined.
  • In this way, recognition of the first frame image can be carried out directly based on the image itself, and the first lane lines can be obtained without acquiring a corresponding previous frame image and its heat map feature information, which improves the flexibility of the target neural network's processing.
  • For the step of determining the first lane lines in the previous frame image based on the first heat map feature information and the second image feature, reference may be made to the step of determining the second lane line in the above-mentioned embodiment.
  • the offset information corresponding to the first frame image output by the target neural network can be deleted.
  • The initial pixels are pixels in the second heat map.
  • The heat value of each pixel in the second heat map can be determined, whether each pixel belongs to a lane point can then be determined, and the pixels belonging to lane points are used as the initial pixel points corresponding to each second lane line.
  • The offset information corresponding to each second lane point in the second lane line can be used to adjust the position of the initial pixel point corresponding to that second lane point, so as to determine the target position corresponding to each initial pixel point, and the pixel at the target position is used as the target pixel point corresponding to the second lane line.
  • the target pixel points corresponding to each second lane line can be determined according to the following steps 1 to 2:
  • Step 1 For each second lane line, based on the offset information corresponding to the second lane line, determine the offset value corresponding to each initial pixel point in the second lane line.
  • the offset information corresponding to the second lane line can represent the offset value corresponding to each second lane point in the second lane line.
  • the offset value of the initial pixel point corresponding to each second lane point in the second lane line may be determined according to the offset information corresponding to the second lane line.
  • Step 2 Determine the target pixel point corresponding to the second lane line based on the offset value corresponding to each initial pixel point.
  • The offset value of each initial pixel point can be used to adjust the position of that initial pixel point to determine the corresponding target pixel point, that is, to obtain the target pixel point corresponding to each second lane point in the second lane line.
  • target pixel points corresponding to each second lane point in each second lane line may be determined based on the offset information corresponding to each second lane line and the corresponding initial pixel point respectively.
  • In this way, the offset value of each initial pixel point in the second lane line can be obtained, and the position of each initial pixel point can then be adjusted using this offset value to obtain the accurate target pixel point corresponding to each initial pixel point.
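  • Steps 1 and 2 amount to shifting each initial pixel by its predicted width-direction offset; a small sketch follows (the sign convention and names are assumptions):

```python
import numpy as np

def adjust_lane_points(initial_x: np.ndarray, offsets: np.ndarray) -> np.ndarray:
    """initial_x: x coordinates of the initial pixels of one second lane line, one per
    fixed image row; offsets: predicted width-direction offset value per row.
    Returns the target pixel x coordinates used for matching (sign convention assumed)."""
    return initial_x + offsets

initial_x = np.array([312.0, 324.0, 337.0, 351.0])
offsets   = np.array([ -2.5,  -2.0,  -2.5,  -2.0])   # predicted offset per image row
print(adjust_lane_points(initial_x, offsets))         # [309.5 322.  334.5 349. ]
```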
  • S103-3 Based on the target pixel point corresponding to each second lane line and the pixel point corresponding to each first lane line, determine the first lane line matching at least one second lane line, and obtain at least one second lane line tracking results.
  • Based on the position of the target pixel point corresponding to each second lane point of the second lane line and the position of the pixel point corresponding to each first lane point of the first lane line, the initial distance between each target pixel point corresponding to the second lane line and each pixel point in the first lane line can be determined.
  • The initial distance can be the distance between a target pixel point and the pixel point in the first lane line at the same height, that is, the target pixel point and the first lane line pixel point corresponding to each initial distance are at the same image height.
  • The target distance between the second lane line and the first lane line can then be determined from the initial distances corresponding to the target pixel points, and in this way the target distance between each second lane line and each first lane line can be determined.
  • For each first lane line, the second lane line with the shortest target distance to it can be determined, and this second lane line can be used as the lane line matching the first lane line, giving the tracking result of that second lane line. On this basis, the tracking result of each second lane line can be determined.
  • The similarity between each first lane line and each second lane line may also be determined based on the features of the target pixel points corresponding to the second lane points of the second lane line and the features of the pixel points corresponding to the first lane points of the first lane line, and the tracking result of each second lane line may then be determined based on the determined similarities.
  • In the above embodiment, the heat value of each pixel can be determined based on the second heat map feature information, and the second lane line can then be determined.
  • The offset information can reflect the prediction error incurred when predicting the second lane line; using the determined offset information to adjust the positions of the initial pixel points adjusts the predicted position of the second lane line and determines the accurate position of the lane line in the image to be recognized. Then, using the target pixel points corresponding to the second lane line and the pixel points corresponding to each first lane line, the distance between the second lane line and each first lane line can be accurately determined, so performing lane line matching based on these distances can accurately determine the tracking result corresponding to the second lane line.
  • S103-3 may be implemented according to the following steps:
  • S103-3-1 For each second lane line, based on the feature information of the second heat map, select the pixel points to be matched from the target pixel points of the second lane line.
  • the pixel points to be matched are pixel points screened out from the target pixel points and used to determine the first lane line matching the second lane line.
  • The heat value corresponding to each target pixel point in the second lane line may be determined first.
  • For each target pixel point, its corresponding initial pixel point can be determined, the heat value corresponding to that initial pixel point can be determined based on the second heat map feature information, and this heat value is then used as the heat value corresponding to the target pixel point. In this way, the heat value corresponding to each target pixel point in the second lane line can be determined.
  • The target pixel points can then be sorted in descending order of heat value, and the target pixel points ranked above a preset order are used as the pixel points to be matched.
  • Alternatively, the confidence corresponding to each target pixel point can be determined first; the first target pixel points whose confidence is greater than a preset confidence threshold are then selected, the heat value of each first target pixel point is determined based on the second heat map feature information, the first target pixel points are sorted by heat value, and the first target pixel points ranked above the preset order are used as the pixel points to be matched.
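  • The screening of the pixel points to be matched can be sketched as follows; the confidence threshold and top-k cutoff stand in for the preset confidence threshold and preset order, and their values are assumptions:

```python
import numpy as np

def select_points_to_match(heat_vals: np.ndarray, conf: np.ndarray,
                           conf_thresh: float = 0.3, top_k: int = 10) -> np.ndarray:
    """Keep the target pixel points whose confidence exceeds the (assumed) threshold,
    sort the survivors by heat value in descending order, and return the indices of
    the top_k points to be matched."""
    keep = np.where(conf > conf_thresh)[0]
    ranked = keep[np.argsort(heat_vals[keep])[::-1]]   # descending heat value
    return ranked[:top_k]

heat = np.random.rand(40)   # heat value of each target pixel point of one lane line
conf = np.random.rand(40)   # confidence of each target pixel point
print(select_points_to_match(heat, conf))
```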
  • S103-3-2 Based on the pixel points corresponding to each first lane line, respectively determine a first distance between each to-be-matched pixel point and each first lane line.
  • For each pixel point to be matched, the pixel point of the first lane line at the same image height can be screened out, and the distance between the pixel point to be matched and this screened-out pixel point can be determined.
  • In this way, the first distance between each pixel point to be matched and each first lane line can be obtained.
  • S103-3-3 Based on the first distance corresponding to each to-be-matched pixel point, determine a second distance between the second lane line and each first lane line.
  • The second distance may be the Euclidean distance between the two lane lines.
  • The second distance between the second lane line and a first lane line can be determined from the first distances between the pixel points to be matched in the second lane line and the corresponding pixel points in that first lane line.
  • For example, the average of the first distances corresponding to the pixel points to be matched in the second lane line can be used as the second distance, or the smallest first distance can be selected as the second distance, or a weight can be determined for each pixel point to be matched according to its confidence and the first distances can be weighted and summed according to these weights to obtain the second distance; this is not limited here.
  • In this way, the second distance between each second lane line and each first lane line is determined from the screened pixel points to be matched, without using every target pixel point of the second lane line in the calculation, which reduces the amount of computation for determining the second distance, increases the speed at which it is determined, and is therefore beneficial to improving the speed of determining the tracking result.
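  • The first-distance and second-distance computations can be sketched as follows; the aggregation modes mirror the averaging, minimum, and confidence-weighted options mentioned above, and the names and sample values are assumptions:

```python
from typing import Optional
import numpy as np

def first_distances(matched_x: np.ndarray, first_lane_x: np.ndarray) -> np.ndarray:
    """Per-row first distance between the pixels to be matched of a second lane line
    and the first lane line pixels at the same image heights (width direction only)."""
    return np.abs(matched_x - first_lane_x)

def second_distance(first_dists: np.ndarray, weights: Optional[np.ndarray] = None,
                    mode: str = "mean") -> float:
    """Aggregate the per-point first distances into one line-to-line second distance."""
    if mode == "mean":
        return float(first_dists.mean())
    if mode == "min":
        return float(first_dists.min())
    if mode == "weighted":                      # weights derived from point confidences
        w = weights / weights.sum()
        return float((w * first_dists).sum())
    raise ValueError(f"unknown mode: {mode}")

d = first_distances(np.array([310.0, 324.0, 338.0]), np.array([308.0, 321.0, 334.0]))
print(second_distance(d), second_distance(d, mode="min"),
      second_distance(d, weights=np.array([0.9, 0.6, 0.3]), mode="weighted"))
```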
  • S103-3-4 Based on the determined second distance, determine a first lane line matching the second lane line.
  • The first lane line with the shortest second distance to the second lane line can be determined based on the second distances between the second lane line and each first lane line, and this first lane line is used as the lane line matching the second lane line.
  • When multiple second lane lines match the same first lane line, the second distances between each of these second lane lines and that first lane line can be compared; the shortest second distance is determined, and the second lane line corresponding to the shortest second distance is taken as the lane line that finally matches the first lane line.
  • In this way, the two matched lane lines correspond one to one, with no other lane line matching either of them, thereby improving the rationality and accuracy of the tracking result.
  • For each second lane line, the second-shortest second distance can also be determined; then, using these second-shortest second distances together with the shortest second distances corresponding to the other second lane lines, it is determined whether each of the plurality of second lane lines has a matching first lane line, and the matching result of each second lane line is thereby determined.
  • The second distance between the second lane line and the first lane line can also be compared with a first preset value, where the first preset value is used to characterize the maximum distance between a matched first lane line and second lane line. If the second distance is greater than this maximum distance, then even if it is the shortest second distance, the two lane lines are still regarded as mismatched lane lines. In this way, the shortest second distance is further checked against the first preset value, which improves the rationality and accuracy of the determined matching result.
  • When the shortest second distance is less than the first preset value, the first lane line can be used as the final matching lane line of the second lane line corresponding to the shortest second distance.
  • When the shortest second distance is greater than the first preset value, the second lane line corresponding to the shortest second distance can be used as a new lane line, that is, it can be determined that there is no lane line matching that second lane line among the first lane lines, thereby improving the accuracy of the determined tracking result.
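  • One simple way to realize the one-to-one matching rule and the first-preset-value check is a greedy pass over the second distances, as sketched below; this is an illustration consistent with the text, not necessarily the exact procedure of the disclosure, and the value of max_dist is an assumption:

```python
import numpy as np

def match_lanes(dist: np.ndarray, max_dist: float = 30.0):
    """dist[i, j]: second distance between second lane line i and first lane line j.
    Greedy one-to-one matching by increasing distance, discarding pairs whose distance
    exceeds the (assumed) first preset value max_dist.
    Returns {second_idx: first_idx}; unmatched second lines become new lane lines."""
    matches, used_first = {}, set()
    order = np.dstack(np.unravel_index(np.argsort(dist, axis=None), dist.shape))[0]
    for i, j in order:                          # pairs in order of increasing distance
        if i in matches or j in used_first:
            continue
        if dist[i, j] > max_dist:               # too far apart to be the same lane line
            break                               # remaining pairs are even farther
        matches[int(i)] = int(j)
        used_first.add(int(j))
    return matches

D = np.array([[3.0, 40.0, 55.0],
              [42.0, 2.5, 60.0],
              [50.0, 48.0, 70.0]])
print(match_lanes(D))   # {1: 1, 0: 0}; second lane line 2 gets a new lane line number
```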
  • The number of unsuccessful matches is used to represent the number of consecutive times that a first lane line fails to match a second lane line.
  • The number of unsuccessful matches can be compared with a second preset value; if the number of unsuccessful matches is greater than the second preset value, it means that the first lane line has repeatedly failed to match, indicating that this lane line has probably disappeared from the driving range of the target vehicle, and the first lane line can then be deleted, thereby reducing the number of stored lane lines, reducing the storage space occupied, and at the same time improving detection efficiency.
  • Otherwise, the first lane line may continue to be stored for the next matching.
  • When the first lane line is matched successfully, its number of unsuccessful matches may be reset to an initial value, where the initial value may be 0.
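  • The lifecycle handling of unmatched first lane lines can be sketched as follows; the dictionary of counters and the value used for the second preset value are assumptions:

```python
def update_unmatched_counts(tracks: dict, matched_first_ids: set, max_misses: int = 5):
    """tracks: first lane line number -> consecutive unmatched-frame count.
    Reset the count of matched lane lines to the initial value 0, increment the
    others, and delete lines missed for more than the (assumed) second preset value."""
    for lane_id in list(tracks):
        if lane_id in matched_first_ids:
            tracks[lane_id] = 0                      # matched again: reset the counter
        else:
            tracks[lane_id] += 1
            if tracks[lane_id] > max_misses:         # probably left the driving range
                del tracks[lane_id]

counts = {1: 0, 2: 4, 3: 5}
update_unmatched_counts(counts, matched_first_ids={1}, max_misses=5)
print(counts)   # {1: 0, 2: 5}; lane line 3 exceeded the limit and was deleted
```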
  • Fig. 5 is a schematic diagram of the implementation flow, provided by an embodiment of the present disclosure, of the tracking module in the target neural network determining the tracking result corresponding to the second lane line. After the tracking module 51 acquires the lane lines and the offset information of the lane lines input by the reasoning module, it can first determine whether the lane lines are the lane lines corresponding to the first frame image (S501), that is, judge whether the image to be recognized corresponding to the lane lines is the first frame image; if so, there are no other lane lines for them to match, and the lane line identification information corresponding to each lane line can be generated directly (S502). Moreover, when the image to be recognized is the first frame image, the reasoning module only inputs the lane lines corresponding to the first frame image and does not input the offset information of the lane lines.
  • The local database 52 is used to store the information of each lane line determined after image recognition of the image to be recognized, which can include the lane point information of the lane line, the lane line number information, and so on.
  • If the image to be recognized is not the first frame image, the tracking result of the lane lines input by the reasoning module is determined according to the lane line tracking method introduced above; as shown in Figure 5, this can follow steps S521 to S524: Step S521, based on the heat map feature information corresponding to the Nth frame image to be recognized, determine the pixel points to be matched corresponding to each lane line; Step S522, determine the second distance between each lane line corresponding to the Nth frame image to be recognized and the lane lines corresponding to the previous frame image; Step S523, determine the tracking result corresponding to each lane line of the Nth frame image to be recognized; Step S524, replace the lane lines corresponding to the previous frame image with the lane lines corresponding to the Nth frame image to be recognized.
  • The image to be recognized and the previous frame image can also each be down-sampled by a preset sampling multiple to obtain a sampled new image to be recognized corresponding to the image to be recognized, and a sampled new previous frame image corresponding to the previous frame image.
  • For example, the image to be recognized and the previous frame image may be down-sampled by a factor of 4 or a factor of 8.
  • The target neural network can then convolve the new image to be recognized, the new previous frame image, and the first heat map corresponding to the previous frame image to obtain the first image feature corresponding to the new image to be recognized, and the second lane line in the image to be recognized and the offset information corresponding to the second lane line are determined using this first image feature.
  • Since the first heat map is obtained by image recognition of the previous frame image, the down-sampling of the previous frame image has already been completed during that recognition; the obtained first heat map is therefore already the first heat map corresponding to the sampled previous frame image and can be convolved directly without further down-sampling.
  • Each pixel in the sampled images (the new previous frame image and the new image to be recognized) can represent the number of original pixels corresponding to the sampling multiple, and using the information of each pixel in these images is therefore beneficial to determining a more accurate second lane line and its corresponding offset information.
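  • The down-sampling by a preset sampling multiple can be sketched as follows, assuming a PyTorch tensor layout; the interpolation mode is an assumption:

```python
import torch
import torch.nn.functional as F

def downsample(img: torch.Tensor, factor: int = 4) -> torch.Tensor:
    """Down-sample an image tensor of shape (N, C, H, W) by the preset sampling
    multiple (4x or 8x in the text) before it is fed to the target neural network."""
    return F.interpolate(img, scale_factor=1.0 / factor, mode="bilinear", align_corners=False)

curr = torch.rand(1, 3, 512, 1024)   # image to be recognized
prev = torch.rand(1, 3, 512, 1024)   # previous frame image
print(downsample(curr, 4).shape, downsample(prev, 8).shape)
# torch.Size([1, 3, 128, 256]) torch.Size([1, 3, 64, 128])
```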
  • The target neural network can be trained to improve prediction accuracy; therefore, the embodiments of the present disclosure also provide the following steps for training the target neural network on sample recognition images:
  • Step T1: Input the sample recognition image, the first predicted heat map feature information corresponding to the previous frame sample image of the sample recognition image, and the previous frame sample image into the target neural network, and process the input information through the target neural network to obtain the second predicted heat map feature information corresponding to the sample recognition image, the predicted lane lines, the predicted offset information, and the predicted tracking results.
  • The multiple sample recognition images may be images in the same video clip, each including lane lines, or they may be separately captured images that include lane lines.
  • The plurality of sample recognition images may be images in a video clip, captured by the autonomous vehicle, that contains visible lane lines.
  • The first predicted heat map feature information corresponding to the previous frame sample image may be obtained by image recognition of the previous frame sample image by the target neural network, and the predicted lane lines are the lane lines in the sample recognition image predicted and output by the target neural network.
  • The sample recognition image, the first predicted heat map feature information corresponding to the previous frame sample image of the sample recognition image, and the previous frame sample image can be input into the target neural network; the reasoning module in the target neural network processes this input information and determines the sample predicted image feature corresponding to the sample recognition image, and then determines the second predicted heat map feature information corresponding to the sample recognition image based on the sample predicted image feature. Furthermore, the predicted lane lines corresponding to the sample recognition image and the predicted offset information corresponding to the predicted lane lines can be determined based on the second predicted heat map feature information and the sample predicted image feature, and the tracking module is then used to determine the corresponding predicted tracking results.
  • Step T2: Determine the loss value based on the second predicted heat map feature information and the labeled heat map feature information corresponding to the sample recognition image, the predicted lane lines and the labeled lane lines corresponding to the sample recognition image, the predicted offset information and the labeled offset information corresponding to the sample recognition image, and the predicted tracking result corresponding to each predicted lane line and the labeled tracking result corresponding to each labeled lane line.
  • The first prediction loss value corresponding to the heat map branch in the target neural network can be determined based on the second predicted heat map feature information and the labeled heat map feature information (that is, the ground truth) corresponding to the sample recognition image; the second prediction loss value of the feature embedding branch in the target neural network can be determined based on the predicted lane lines and the labeled lane lines corresponding to the sample recognition image; the third prediction loss value of the offset branch in the target neural network can be determined based on the predicted offset information and the labeled offset information corresponding to the sample recognition image; and a fourth prediction loss value can be determined based on the predicted tracking results and the labeled tracking results.
  • The first prediction loss value, the second prediction loss value, the third prediction loss value, and the fourth prediction loss value may together serve as the loss value corresponding to the target neural network.
  • Step T3 Adjust the network parameter value of the target neural network according to the loss value until the preset training cut-off condition is met, and a trained target neural network is obtained.
  • the preset training cut-off condition may be that the number of rounds of iterative training is greater than the preset number of rounds and/or the prediction accuracy of the trained target neural network reaches the preset accuracy.
  • The target neural network can be iteratively trained according to the loss value, so as to adjust the network parameter values of the target neural network; when it is determined that the preset training cut-off condition is satisfied, the training of the target neural network is completed, the network parameter values obtained at this point are used as the target network parameter values corresponding to the target neural network, and the network at this point is used as the trained target neural network.
  • In this way, the loss values corresponding to all of the above information can be determined respectively, and the network parameter values of the target neural network can then be adjusted using these loss values, which can improve the prediction accuracy of the target neural network and thereby improve the accuracy of each of the above outputs of the target neural network.
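  • A hedged sketch of combining the four prediction loss values into one training loss is given below; the individual loss functions and the equal weighting are assumptions, since the disclosure does not specify them:

```python
import torch
import torch.nn.functional as F

def total_loss(pred_heat, gt_heat, embed_loss, pred_offset, gt_offset, lane_mask,
               pred_track_logits, gt_track_labels):
    """Combine the four prediction loss values described above into one training loss
    (loss functions and weighting are assumptions of this sketch)."""
    l_heat   = F.binary_cross_entropy_with_logits(pred_heat, gt_heat)   # heat map branch
    l_embed  = embed_loss                                               # feature embedding branch (e.g. a discriminative clustering loss)
    l_offset = F.smooth_l1_loss(pred_offset[lane_mask], gt_offset[lane_mask])  # offset branch, on lane points only
    l_track  = F.cross_entropy(pred_track_logits, gt_track_labels)      # tracking results
    return l_heat + l_embed + l_offset + l_track

# One dummy training step with random tensors.
loss = total_loss(
    torch.randn(1, 1, 64, 128, requires_grad=True), torch.rand(1, 1, 64, 128),
    torch.tensor(0.2),
    torch.randn(1, 64, 128, requires_grad=True), torch.randn(1, 64, 128),
    torch.rand(1, 64, 128) > 0.5,
    torch.randn(4, 5, requires_grad=True), torch.tensor([0, 1, 2, 3]))
loss.backward()
print(float(loss))
```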
  • the predicted offset information includes the predicted offset of the lane line in at least one image direction.
  • the predicted offset information may include an offset in the image width direction and/or an offset in the image height direction.
  • When determining the predicted offsets corresponding to a predicted lane line, the offset branch in the target neural network can fix one direction and output the predicted offset of the predicted lane line in the other direction relative to the lane line in the previous frame image. For example, the image height direction is fixed, and for each predicted lane line, at each image height, its offset in the image width direction relative to the lane line in the previous frame image can be output.
  • FIG. 6 is a schematic diagram of an output prediction offset information provided by an embodiment of the present disclosure.
  • In FIG. 6, X represents the offset in the image width direction and Y represents the image height direction; at each image height, the offset of each predicted lane line in the X direction can change gradually as the image height increases or decreases.
  • Different offsets can be represented by different colors, and the offsets represented by the different colors in the X direction are shown in the color indicator bar 60.
  • FIG. 6 shows the predicted offsets 611, 612, 613, and 614 corresponding to predicted lane line 1, predicted lane line 2, predicted lane line 3, and predicted lane line 4, respectively.
  • The unit of the offset in the X direction can be related to the down-sampling multiple; for example, when the preset sampling multiple is 8, the unit can be 8 × offset, and when the preset sampling multiple is 4, the unit can be 4 × offset.
  • A smoothing loss for predicting smooth offsets can also be constructed from the labeled offset information and the predicted offset information, and the target neural network can be trained with this smoothing loss to reduce sudden changes in the offsets of adjacent lane points of the same lane line when the trained target neural network outputs the predicted offset information of the lane lines in the image to be recognized.
  • For example, suppose that for one lane line the offset of the lane point at image height 10 is -2.5, the offset of the lane point at image height 12 is -1.5, the offset of the lane point at image height 13 is -1, the offset of the lane point at image height 14 is -6, and the offset of the lane point at image height 15 is 0. Here, because the offset -6 at image height 14 differs from the offset 0 at image height 15 and from the offset -1.5 at image height 12 by far more than the offset -2.5 at image height 10 differs from the offset -1.5 at image height 12, or than the offset -1.5 at image height 12 differs from the offset -1 at image height 13, there is a jump in the predicted offset information corresponding to this lane line.
  • Training the target neural network with the smoothing loss can improve its ability to predict smoothly; that is, the trained target neural network can adjust the offset -6 predicted at the lane point corresponding to image height 14, and the adjusted output offset may be, for example, -0.5, so that the smooth predicted offset information corresponding to each lane line shown in FIG. 6 is obtained.
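  • A possible form of the smoothing loss, penalizing large differences between the offsets of adjacent lane points of the same lane line, is sketched below; only its purpose comes from the text, and the exact formulation is an assumption:

```python
import torch

def smoothing_loss(pred_offsets: torch.Tensor, valid: torch.Tensor) -> torch.Tensor:
    """pred_offsets: (num_lanes, num_rows) predicted width-direction offsets, one per
    fixed image height; valid: boolean mask of rows that contain a lane point.
    Penalizes jumps between offsets of adjacent lane points of the same lane line."""
    diff = pred_offsets[:, 1:] - pred_offsets[:, :-1]        # adjacent-row differences
    both = valid[:, 1:] & valid[:, :-1]                      # only where both rows are lane points
    return (diff.abs() * both).sum() / both.sum().clamp(min=1)

offsets = torch.tensor([[-2.5, -1.5, -1.0, -6.0, 0.0]])      # the jump at image height 14 in the text
mask = torch.ones_like(offsets, dtype=torch.bool)
print(smoothing_loss(offsets, mask))                          # large value, reflecting the jump
```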
  • The previous frame sample image of a sample recognition image may also be acquired by the following step: performing random translation and/or rotation operations on the sample recognition image to obtain the previous frame sample image of that sample recognition image.
  • The random translation operation shifts the positions of the pixels in the sample recognition image, that is, it shifts the positions of the pixels corresponding to the lane lines, and obtains an image different from the sample recognition image.
  • The random rotation operation rotates the positions of the pixels in the sample recognition image and can also produce an image different from the sample recognition image. In this way, through random translation and/or rotation operations, the positions of the pixels in the sample recognition image can be changed and a simulated previous frame image can be obtained; therefore, training of the target neural network can be completed with only one frame of sample recognition image, which improves the flexibility of training the neural network.
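  • The random translation and/or rotation used to synthesize a previous frame sample image can be sketched as follows, assuming OpenCV is available; the translation and rotation ranges are assumptions:

```python
import numpy as np
import cv2

def simulate_previous_frame(sample_img: np.ndarray,
                            max_shift: int = 10, max_angle: float = 3.0) -> np.ndarray:
    """Apply a random translation and rotation to a sample recognition image to
    synthesize its 'previous frame' (the ranges are assumptions, not from the text)."""
    h, w = sample_img.shape[:2]
    dx, dy = np.random.randint(-max_shift, max_shift + 1, size=2)
    angle = np.random.uniform(-max_angle, max_angle)
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)   # rotation about the image center
    M[:, 2] += (dx, dy)                                       # add the random translation
    return cv2.warpAffine(sample_img, M, (w, h))

img = (np.random.rand(256, 512, 3) * 255).astype(np.uint8)
prev = simulate_previous_frame(img)
print(prev.shape)   # (256, 512, 3)
```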
  • FIG. 7 is a schematic diagram of an implementation process for training a target neural network provided by an embodiment of the present disclosure. FIG. 7 shows the process of training the target neural network using multiple sample recognition images: after the target neural network 70 performs image recognition on the Nth frame sample recognition image among the multiple sample recognition images, it can determine the predicted heat map feature information 701 corresponding to the Nth frame sample recognition image, each predicted lane line 702 corresponding to the sample recognition image, the predicted offset information 703 of each predicted lane line, and the predicted tracking result 704 of each predicted lane line. Then, the heat map prediction loss value 711 of the target neural network corresponding to the Nth frame sample recognition image can be determined based on the predicted heat map feature information 701 corresponding to the Nth frame sample recognition image and the labeled heat map feature information corresponding to the Nth frame sample recognition image, and the corresponding prediction loss values can likewise be determined based on the predicted lane lines 702 corresponding to the Nth frame sample recognition image and the labeled lane lines corresponding to the Nth frame sample recognition image, and so on for the other outputs.
  • The target neural network 70 can then be iteratively trained using the loss values corresponding to the Nth frame sample recognition image and the loss values corresponding to the (N+1)th frame sample recognition image, so as to adjust the network parameter values of the target neural network; when the training cut-off condition is met, the trained target neural network is obtained.
  • The embodiments of the present disclosure also provide a lane line tracking device corresponding to the lane line tracking method. Since the problem-solving principle of the device in the embodiments of the present disclosure is similar to that of the above-mentioned lane line tracking method, the implementation of the device can refer to the implementation of the method.
  • Fig. 8 is a schematic diagram of the composition and structure of a lane line tracking device provided by an embodiment of the present disclosure. As shown in Fig. 8, the device includes:
  • the acquisition part 801 is configured to acquire a previous frame image of the image to be recognized, and a first lane line obtained by performing image recognition on the previous frame image;
  • the first determining part 802 is configured to determine, based on the previous frame image and the image to be recognized, the second lane line in the image to be recognized and the offset information of the second lane line relative to the closest first lane line;
  • the second determining part 803 is configured to determine, based on the offset information corresponding to each of the second lane lines and each of the first lane lines, a first lane line matching at least one of the second lane lines, so as to obtain a tracking result of the at least one second lane line.
  • the first determining part 802 is further configured to: determine, based on the previous frame image, the first heat map feature information corresponding to the previous frame image and the image to be recognized, the first image feature corresponding to the image to be recognized; determine, based on the first image feature, the second heat map feature information corresponding to the image to be recognized; and determine, based on the second heat map feature information and the first image feature, the second lane line in the image to be recognized and the offset information corresponding to the second lane line.
  • the second determining part 803 is further configured to: determine, based on the second heat map feature information, the initial pixel points corresponding to each of the second lane lines; determine, based on the offset information corresponding to each of the second lane lines and the initial pixel points corresponding to each of the second lane lines, the target pixel points corresponding to each of the second lane lines; and determine, based on the target pixel points corresponding to each of the second lane lines and the pixel points corresponding to each of the first lane lines, the first lane line matching at least one of the second lane lines, to obtain a tracking result of the at least one second lane line.
  • the second determining part 803 is further configured to: for each second lane line, determine, based on the offset information corresponding to the second lane line, an offset value corresponding to each initial pixel point of the second lane line; and determine, based on the offset value corresponding to each initial pixel point, the target pixel points corresponding to the second lane line.
  • the second determining part 803 is further configured to: for each second lane line, filter the target pixel points of the second lane line based on the second heat map feature information to obtain pixel points to be matched; determine, based on the pixel points corresponding to each first lane line, a first distance between each pixel point to be matched and each of the first lane lines; determine, based on the first distances, a second distance between the second lane line and each of the first lane lines; and determine, based on the determined second distances, the first lane line matching the second lane line.
  • the second determining part 803 is further configured to: after determining the first lane line that matches the second lane line, when multiple second lane lines match the same first lane line, In the case of a lane line, it is determined that the second lane line with the shortest distance from the first lane line matches the first lane line.
  • the second determining part 803 is further configured to take the first lane line corresponding to the shortest second distance as the lane line matching the second lane line when the shortest second distance is less than a first preset value.
  • the second determining part 803 is further configured to take the second lane line as a new lane line when the shortest second distance is greater than the first preset value.
  • the device further includes: a deleting part 804 configured to: after the tracking result is obtained, for each first lane line that is not successfully matched, determine the number of unsuccessful matches of the first lane line; the number of unsuccessful matches is used to characterize the number of times that the corresponding first lane line fails to match a second lane line in consecutive multiple frames of images to be recognized; and delete the first lane line in the case that the number of unsuccessful matches is greater than a second preset value.
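Purely for illustration, the unsuccessful-match bookkeeping described in this paragraph might look like the following minimal Python sketch; `TrackedLane`, `miss_count` and `max_misses` are assumed names, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class TrackedLane:
    lane_id: int
    points: list          # pixel points of the first lane line
    miss_count: int = 0   # consecutive frames without a matching second lane line

def prune_unmatched(tracked: dict, matched_ids: set, max_misses: int = 3) -> dict:
    """Increment miss counters for unmatched lanes and delete stale ones."""
    kept = {}
    for lane_id, lane in tracked.items():
        if lane_id in matched_ids:
            lane.miss_count = 0            # matched in this frame: reset the counter
        else:
            lane.miss_count += 1           # not matched in this frame
        if lane.miss_count <= max_misses:  # keep only lanes below the preset value
            kept[lane_id] = lane
    return kept
```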
  • the lane line tracking method is implemented by a target neural network, and the target neural network is trained by using multiple sample recognition images.
  • the device further includes: a training part 805 configured to train the target neural network by the following steps: inputting the sample recognition image, the first predicted heat map feature information and the previous frame sample image into the target neural network, and outputting, by the target neural network, the second predicted heat map feature information corresponding to the sample recognition image, the predicted lane lines corresponding to the sample recognition image, the predicted offset information corresponding to the predicted lane lines, and the predicted tracking result corresponding to each of the predicted lane lines; determining a loss value based on the second predicted heat map feature information, the labeled heat map feature information corresponding to the sample recognition image, the predicted lane lines, the labeled lane lines corresponding to the sample recognition image, the predicted offset information, the labeled offset information corresponding to the sample recognition image, the predicted tracking result corresponding to each predicted lane line, and the labeled tracking result corresponding to each labeled lane line; and adjusting the network parameter values of the target neural network according to the loss value until a preset training cut-off condition is met, to obtain the trained target neural network.
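Purely as an illustration of the loss combination described in this paragraph, the following PyTorch-style sketch combines four hypothetical loss terms. The specific loss functions (binary cross-entropy, cross-entropy, smooth L1), the weights `w` and the network interface `net(...)` are assumptions; the disclosure only states that the heat map, lane line, offset and tracking predictions are each compared with their labels to obtain a loss value used to adjust the network parameters.

```python
import torch
import torch.nn.functional as F

def training_step(net, optimizer, sample_img, prev_img, prev_heatmap, labels,
                  w=(1.0, 1.0, 1.0, 1.0)):
    """One hypothetical training iteration combining the four prediction losses."""
    pred_heatmap, pred_lanes, pred_offsets, pred_tracking = net(sample_img, prev_img, prev_heatmap)

    loss_heatmap  = F.binary_cross_entropy_with_logits(pred_heatmap, labels["heatmap"])
    loss_lane     = F.cross_entropy(pred_lanes, labels["lanes"])        # lane-instance assignment
    loss_offset   = F.smooth_l1_loss(pred_offsets, labels["offsets"])   # width-direction offsets
    loss_tracking = F.cross_entropy(pred_tracking, labels["tracking"])  # matching to previous lanes

    loss = w[0]*loss_heatmap + w[1]*loss_lane + w[2]*loss_offset + w[3]*loss_tracking
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```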
  • the predicted offset information includes an offset of the predicted lane line in at least one image direction.
  • the training part 805 is further configured to acquire the previous frame sample image of the sample recognition image through the following steps: performing random translation and/or rotation operations on the sample recognition image to obtain the previous frame sample image of the sample recognition image.
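One possible way to synthesize the previous-frame sample from a single sample recognition image, as described in this paragraph, is a small random translation and/or rotation. The sketch below uses OpenCV; the magnitude ranges and function names are illustrative assumptions, not the disclosed implementation.

```python
import cv2
import numpy as np

def simulate_previous_frame(sample_img: np.ndarray,
                            max_shift: int = 10,
                            max_angle: float = 2.0) -> np.ndarray:
    """Apply a random translation and rotation so the result mimics a previous frame."""
    h, w = sample_img.shape[:2]
    dx = np.random.uniform(-max_shift, max_shift)      # horizontal shift in pixels
    dy = np.random.uniform(-max_shift, max_shift)      # vertical shift in pixels
    angle = np.random.uniform(-max_angle, max_angle)   # rotation in degrees

    # Rotation about the image centre combined with the translation.
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    m[:, 2] += (dx, dy)
    return cv2.warpAffine(sample_img, m, (w, h))
```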
  • the acquisition part 801 is further configured to: if the previous frame image is the first frame image, perform image recognition on the previous frame image and determine the second image feature corresponding to the previous frame image; determine, based on the second image feature, the first heat map feature information corresponding to the previous frame image; and determine, based on the first heat map feature information and the second image feature, the first lane line in the previous frame image.
  • the first determining part 802 is further configured to: perform downsampling processing on the image to be recognized and the previous frame image respectively according to a preset sampling multiple, to obtain a new image to be recognized and a new previous frame image; and determine, based on the new previous frame image and the new image to be recognized, the second lane line in the image to be recognized and the offset information of the second lane line relative to the closest first lane line.
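A minimal sketch of the downsampling by a preset sampling multiple follows, assuming OpenCV and an integer multiple; the interpolation choice is an assumption.

```python
import cv2
import numpy as np

def downsample_pair(image: np.ndarray, prev_image: np.ndarray, multiple: int = 2):
    """Downsample both frames by the same preset sampling multiple."""
    def down(img: np.ndarray) -> np.ndarray:
        h, w = img.shape[:2]
        return cv2.resize(img, (w // multiple, h // multiple), interpolation=cv2.INTER_AREA)
    return down(image), down(prev_image)
```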
  • a "part" may be a part of a circuit, a part of a processor, a part of a program or software, and so on; of course, it may also be a unit, and it may be modular or non-modular.
  • FIG. 9 is a schematic structural diagram of a computer device provided by an embodiment of the present disclosure, including:
  • a processor 91 and a memory 92; the memory 92 stores machine-readable instructions executable by the processor 91, and the processor 91 is configured to execute the machine-readable instructions stored in the memory 92. When the machine-readable instructions are executed by the processor 91, the processor 91 performs the following steps: S101: acquire the previous frame image of the image to be recognized, and the first lane line obtained by performing image recognition on the previous frame image; S102: based on the previous frame image and the image to be recognized, determine the second lane line in the image to be recognized, and the offset information of the second lane line relative to the closest first lane line; and S103: based on the offset information corresponding to each second lane line and each first lane line, determine the first lane line matching at least one second lane line, to obtain a tracking result of the at least one second lane line.
  • the memory 92 includes an internal memory 921 and an external memory 922; the internal memory 921, also called internal storage, is used for temporarily storing computing data in the processor 91 and data exchanged with an external memory 922 such as a hard disk; the processor 91 exchanges data with the external memory 922 through the internal memory 921.
  • An embodiment of the present disclosure further provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the steps of the lane line tracking method described in the foregoing method embodiments are executed.
  • the storage medium may be a volatile or non-volatile computer-readable storage medium.
  • the computer program product of the lane line tracking method provided by the embodiments of the present disclosure includes a computer-readable storage medium storing program code, and the instructions included in the program code can be used to execute the steps of the lane line tracking method described in the above method embodiments.
  • the computer program product can be specifically realized by means of hardware, software or a combination thereof.
  • the computer program product is embodied as a computer storage medium, and in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK) and the like.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • if the functions are realized in the form of software functional units and sold or used as independent products, they can be stored in a non-volatile computer-readable storage medium executable by a processor.
  • the technical solution of the present disclosure, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
  • the aforementioned storage media include: a USB flash drive, a mobile hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disc, or other media that can store program code.
  • Embodiments of the present disclosure provide a lane line tracking method, device, computer equipment, storage medium, and computer program product, wherein the method includes: acquiring a previous frame image of the image to be recognized, and a first lane line obtained by performing image recognition on the previous frame image; determining, based on the previous frame image and the image to be recognized, the second lane line in the image to be recognized, and the offset information of the second lane line relative to the closest first lane line; and determining, based on the offset information corresponding to each second lane line and each first lane line, the first lane line matching at least one second lane line, to obtain a tracking result of the at least one second lane line.
  • In this way, the accuracy of the determined second lane line and of the offset information corresponding to the second lane line can be improved, thereby improving the accuracy of the determined tracking results.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

Provided in the embodiments of the present disclosure are a lane line tracking method and apparatus, a computer device, a storage medium and a computer program product. The method comprises: acquiring the previous frame of image of an image to be subjected to recognition, and first lane lines, which are obtained by performing image recognition on the previous frame of image; on the basis of the previous frame of image and the image to be subjected to recognition, determining second lane lines in the image to be subjected to recognition, and offset information of each second lane line relative to the closest first lane line; and on the basis of the offset information corresponding to each second lane line, and each first lane line, determining a first lane line which matches at least one second lane line, so as to obtain a tracking result of the at least one second lane line.

Description

A lane line tracking method and apparatus, computer device, storage medium and computer program product
Cross-Reference to Related Applications
The present disclosure is based on, and claims priority to, the Chinese patent application with application number 202111433344.8, filed on November 29, 2021 and entitled "A lane line tracking method, device, computer equipment and storage medium"; the entire content of that Chinese patent application is hereby incorporated into the present disclosure by reference.
Technical Field
The present disclosure relates to, but is not limited to, the technical field of automatic driving, and in particular to a lane line tracking method and apparatus, a computer device, a storage medium and a computer program product.
Background
Lane line detection and tracking is a necessary process in automatic driving, and the accuracy of the detection and tracking results bears on the safety of automatic driving. However, the lane line detection and tracking techniques in the related art are not only complicated in their detection and tracking processes, but the accuracy of the finally determined tracking results is also limited.
Summary of the Invention
Embodiments of the present disclosure provide at least a lane line tracking method and apparatus, a computer device, a storage medium and a computer program product.
An embodiment of the present disclosure provides a lane line tracking method, including:
acquiring a previous frame image of an image to be recognized, and a first lane line obtained by performing image recognition on the previous frame image;
determining, based on the previous frame image and the image to be recognized, a second lane line in the image to be recognized, and offset information of the second lane line relative to the closest first lane line;
determining, based on the offset information corresponding to each second lane line and each first lane line, a first lane line matching at least one second lane line, to obtain a tracking result of the at least one second lane line.
An embodiment of the present disclosure further provides a lane line tracking apparatus, including:
an acquisition part configured to acquire a previous frame image of an image to be recognized, and a first lane line obtained by performing image recognition on the previous frame image;
a first determination part configured to determine, based on the previous frame image and the image to be recognized, a second lane line in the image to be recognized, and offset information of the second lane line relative to the closest first lane line;
a second determination part configured to determine, based on the offset information corresponding to each second lane line and each first lane line, a first lane line matching at least one second lane line, to obtain a tracking result of the at least one second lane line.
An embodiment of the present disclosure further provides a computer device including a processor and a memory. The memory stores machine-readable instructions executable by the processor, and the processor is configured to execute the machine-readable instructions stored in the memory; when the machine-readable instructions are executed by the processor, the steps of the above first aspect, or of any possible implementation of the first aspect, are performed.
An embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the computer program is run, the steps of the above first aspect, or of any possible implementation of the first aspect, are performed.
An embodiment of the present disclosure provides a computer program product including a non-transitory computer-readable storage medium storing a computer program; when the computer program is read and executed by a computer, some or all of the steps of the above method are implemented.
In the embodiments of the present disclosure, in the process of detecting and tracking the second lane line in the image to be recognized, combining the previous frame image allows the image features of the previous frame image to be taken into account when determining the second lane line and the offset information of the second lane line relative to the closest first lane line, which helps to improve the accuracy of the determined second lane line and of its corresponding offset information; the second lane line is then tracked based on this more accurate offset information, which can improve the accuracy of the determined tracking result.
To make the above objects, features and advantages of the embodiments of the present disclosure more comprehensible, exemplary embodiments are described in detail below with reference to the accompanying drawings.
Brief Description of the Drawings
To describe the technical solutions of the embodiments of the present disclosure more clearly, the drawings required in the embodiments are briefly introduced below. The drawings here are incorporated into and constitute a part of the specification; they illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be understood that the following drawings show only some embodiments of the present disclosure and therefore should not be regarded as limiting the scope; for a person of ordinary skill in the art, other related drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic flowchart of a lane line tracking method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of recognized first lane lines and second lane lines provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of an inference module in a target neural network performing image recognition on an image to be recognized and determining second lane lines, provided by an embodiment of the present disclosure;
FIG. 4 is a schematic flowchart of an inference module in a target neural network performing image recognition on an image to be recognized, provided by an embodiment of the present disclosure;
FIG. 5 is a schematic flowchart of a tracking module in a target neural network determining the tracking result corresponding to a second lane line, provided by an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of output predicted offset information provided by an embodiment of the present disclosure;
FIG. 7 is a schematic flowchart of training a target neural network provided by an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of the composition and structure of a lane line tracking apparatus provided by an embodiment of the present disclosure;
FIG. 9 is a schematic structural diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the drawings of the embodiments. Obviously, the described embodiments are only a part, not all, of the embodiments of the present disclosure. The components of the embodiments of the present disclosure generally described and illustrated herein may be arranged and designed in a variety of different configurations. Accordingly, the following detailed description of the embodiments of the present disclosure is not intended to limit the claimed scope of the present disclosure, but merely represents selected embodiments of the present disclosure. Based on the embodiments of the present disclosure, all other embodiments obtained by those skilled in the art without creative effort fall within the protection scope of the present disclosure.
In addition, the terms "first", "second" and the like in the description and claims of the embodiments of the present disclosure and in the above drawings are used to distinguish similar objects and are not necessarily used to describe a specific sequence or order. It should be understood that data so used are interchangeable where appropriate, so that the embodiments described here can be implemented in an order other than that illustrated or described here.
"Multiple" or "several" mentioned herein means two or more. "And/or" describes the association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate the three cases of A existing alone, A and B existing simultaneously, and B existing alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
It has been found through research that lane line detection and tracking is a necessary process in automatic driving, and the accuracy of the detection and tracking results bears on the safety of automatic driving. However, most lane line detection and tracking techniques in the related art first use a neural network to detect lane lines, and then post-process the detected lane lines with lengthy traditional algorithms (such as Kalman filtering and the Hungarian algorithm) to track the lane lines.
When traditional algorithms are used for lane line tracking, complex sensor models also need to be incorporated. This not only increases the complexity of the detection and tracking process, but also makes the detection results limited by the detection results of each sensor model, which greatly affects the accuracy of the lane line tracking results.
Based on the above research, embodiments of the present disclosure provide a lane line tracking method and apparatus, a computer device, a storage medium and a computer program product. In the process of detecting and tracking the second lane line in the image to be recognized, combining the previous frame image allows the image features of the previous frame image to be taken into account when determining the second lane line and the offset information of the second lane line relative to the closest first lane line, which helps to improve the accuracy of the determined second lane line and of its corresponding offset information; the second lane line is then tracked based on this more accurate offset information, which can improve the accuracy of the determined tracking result.
The defects of the above solutions are all results obtained by the inventors after practice and careful research. Therefore, the process of discovering the above problems, and the solutions to these problems proposed below in the embodiments of the present disclosure, should all be regarded as contributions made by the inventors to the present disclosure in the course of the disclosure.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be further defined or explained in subsequent drawings.
To facilitate understanding of the embodiments of the present disclosure, a lane line tracking method disclosed in the embodiments of the present disclosure is first introduced in detail. The execution subject of the lane line tracking method provided by the embodiments of the present disclosure is generally a computer device with certain computing power. In some possible implementations, the lane line tracking method may be implemented by a processor invoking computer-readable instructions stored in a memory.
The lane line tracking method provided by the embodiments of the present disclosure is described below taking a target neural network as the execution subject as an example.
FIG. 1 is a schematic flowchart of a lane line tracking method provided by an embodiment of the present disclosure. As shown in FIG. 1, the method may include the following steps S101 to S103:
S101: acquire a previous frame image of an image to be recognized, and a first lane line obtained by performing image recognition on the previous frame image.
Here, the image to be recognized may be an image containing lane lines captured by a camera device installed on a target vehicle. The previous frame image may have been captured earlier by the camera device, or may be generated from the image to be recognized; this is not limited here.
In implementation, after the image to be recognized is acquired, the image to be recognized may be input into the target neural network, and the target neural network is used to perform image recognition on the previous frame image to obtain each first lane line in the previous frame image.
S102: based on the previous frame image and the image to be recognized, determine a second lane line in the image to be recognized, and offset information of the second lane line relative to the closest first lane line.
FIG. 2 is a schematic diagram of recognized first lane lines and second lane lines provided by an embodiment of the present disclosure, in which the first lane lines include a first lane line 11, a first lane line 12, a first lane line 13 and a first lane line 14, and the second lane lines include a second lane line 21, a second lane line 22, a second lane line 23 and a second lane line 24.
The offset information is used to characterize the offset between a second lane line and the closest first lane line. In some implementations, the offset information may be positional offset information between the pixel points corresponding to each first lane point in the second lane line and the pixel points corresponding to each second lane point in the closest first lane line. The lane points may be key points on the lane line, and the offset information may be the offset of each second lane line in the image width direction. When predicting the offset corresponding to the lane points on each lane line, the target neural network may, for the pixel point corresponding to each lane point, fix the information of that pixel point in the image height direction and determine the offset information of that pixel point in the image width direction. In this way, during subsequent lane line matching, the target neural network only needs to compute offset information in one direction (the image width) to adjust the lane line position and determine the matching lane line in the previous frame image, which reduces the amount of computation and helps to improve the speed and efficiency of lane line tracking.
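As a concrete illustration of the width-only offsets described above, the following sketch (with assumed array shapes, not the disclosed implementation) keeps each lane point's row index fixed and shifts only its column:

```python
import numpy as np

def apply_width_offsets(lane_points: np.ndarray, offsets: np.ndarray) -> np.ndarray:
    """Shift lane points along the image width only.

    lane_points: array of shape (K, 2) holding (row, col) pixel coordinates.
    offsets:     array of shape (K,) holding the predicted column offsets.
    """
    shifted = lane_points.astype(np.float32).copy()
    shifted[:, 1] += offsets        # the row (image height) coordinate stays fixed
    return shifted
```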
In implementation, the target neural network may use convolution layers to convolve the previous frame image and the image to be recognized respectively to obtain feature maps corresponding to the two images, and combine the two obtained feature maps to obtain a target feature map corresponding to the image to be recognized. The feature maps corresponding to each image obtained by convolution may include a heat map feature map, a depth feature map, and the like. Afterwards, the image features corresponding to each second lane line may be determined based on the image features included in the obtained target feature map, and each second lane line in the image to be recognized can then be determined.
The target neural network may also determine the first lane lines in the previous frame image according to the image features included in the target feature map; based on the determined second lane lines and first lane lines, the closest first lane line corresponding to each second lane line can be determined, and the offset information of each second lane line relative to the closest first lane line can then be obtained.
The offset information may be predicted by an offset branch in the target neural network.
S103: based on the offset information corresponding to each second lane line and each first lane line, determine a first lane line matching at least one second lane line, to obtain a tracking result of the at least one second lane line.
Here, the tracking result indicates whether there is a lane line among the first lane lines that matches the second lane line; in other words, it can indicate whether there is a second lane line in the image to be recognized that is the same as a first lane line.
Each first lane line may have lane line identification information used to identify the lane line; for example, the lane line identification information may include a lane line number. In implementation, if a matching first lane line exists for a second lane line, the second lane line may be regarded as the same lane line as that first lane line, and the lane line number of the first lane line is used as the lane line number of the second lane line; otherwise, a new lane line number can be generated for the second lane line.
For each second lane line, the offset information of the second lane line may first be used to adjust the position of the second lane line to obtain an adjusted second lane line; the second lane line is then matched with the first lane lines to determine whether there is a first lane line matching the second lane line.
If so, the lane line identification information corresponding to the first lane line matching the second lane line may be used as the lane line identification information of the second lane line, and this lane line identification information is taken as the tracking result of the second lane line. For example, the first lane lines include three first lane lines numbered 1, 2 and 3; if it is determined that the lane line number of the first lane line matching a certain second lane line is 2, the lane line number 2 can be used as the lane line number of that second lane line.
In addition, the second lane line may also be used to replace the matching first lane line stored in a local database. The local database stores each recognized lane line, the image corresponding to the lane line, the image features and the lane line identification information; in some implementations, all information related to the lane lines can be stored in the local database. For example, the second lane line may be used to replace the matching first lane line, or each second lane point corresponding to the second lane line may be directly used to replace the first lane points corresponding to that first lane line.
If not, it may be determined that the second lane line is a newly recognized lane line, lane line identification information corresponding to this lane line is generated and taken as the tracking result of the second lane line, and the second lane line may also be stored in the local database. Continuing the above example, when it is determined that there is no first lane line matching a certain second lane line, a new lane line number 4 can be generated for that second lane line.
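The numbering behaviour in the example above can be made concrete with a small, hypothetical helper; `lane_db`, `matches`, `next_id` and the data formats are assumptions used only for illustration.

```python
def update_lane_ids(lane_db: dict, second_lanes: list, matches: dict, next_id: int):
    """Assign lane numbers to the second lane lines of the current frame.

    lane_db:      {lane_id: lane points} for the stored first lane lines.
    second_lanes: list of lane-point arrays detected in the current frame.
    matches:      {index of second lane: matched first-lane id or None}.
    """
    results = {}
    for idx, lane in enumerate(second_lanes):
        matched_id = matches.get(idx)
        if matched_id is not None:
            results[idx] = matched_id        # reuse the number of the matched first lane line
            lane_db[matched_id] = lane       # replace the stored first lane line
        else:
            results[idx] = next_id           # new lane line: generate a new number
            lane_db[next_id] = lane
            next_id += 1
    return results, next_id
```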
In this way, first, only one target neural network is needed to detect and track the lane lines, which improves the correlation between detection and tracking, and the combination of the two helps to improve detection accuracy. Moreover, the above implementation does not need to be combined with multi-sensor models: detection and tracking can be realized using the acquired image to be recognized, which both reduces the complexity of detection and tracking and improves the universality of the lane line tracking method. In addition, the detection and tracking steps are performed in the same target neural network, which can reduce part of the post-processing for the tracking process, thereby reducing the influence of the detection results of sensor models on the tracking results and improving tracking accuracy. Furthermore, in the process of tracking the second lane line in the image to be recognized, combining the previous frame image allows the image features of the previous frame image to be taken into account when determining the offset information corresponding to the second lane line, which helps to improve the accuracy of the determined offset information; tracking the second lane line based on this more accurate offset information can then improve the accuracy of the determined tracking result.
In some embodiments, S102 may be implemented according to the following steps:
S102-1: based on the previous frame image, the first heat map feature information corresponding to the previous frame image and the image to be recognized, determine a first image feature corresponding to the image to be recognized.
Here, the heat map feature information can reflect the heat value of each pixel point in the image, and can be expressed in the form of a heat map; the first heat map feature information corresponds to a first heat map. The heat values of the pixel points corresponding to different lane lines are different; for example, for a road image, the heat value of each lane point in a lane line is higher than the heat values of points at other positions on the road.
The first heat map feature information corresponding to the previous frame image may be obtained by the target neural network performing image recognition on the previous frame image.
In implementation, after acquiring the previous frame image, the first heat map corresponding to the previous frame image and the image to be recognized, the target neural network may perform a convolution operation on each of these images to obtain the convolution result corresponding to each image; the convolution results may then be fused to obtain an initial feature map corresponding to the initial image features, and an encoder is used to perform feature encoding on the initial image features to obtain a first feature map corresponding to the image to be recognized, where the first feature map includes the first image feature corresponding to the image to be recognized, and the first image feature may be a feature vector.
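The fusion described above could be sketched as follows with PyTorch; the channel counts, the concatenation-based fusion and the two-layer encoder are assumptions made only to illustrate the data flow from previous heat map, previous frame and current frame into one feature map.

```python
import torch
import torch.nn as nn

class FusionStem(nn.Module):
    """Convolve each input, fuse the results, then encode them into one feature map."""
    def __init__(self, feat: int = 64):
        super().__init__()
        self.conv_heat = nn.Conv2d(1, feat, 3, padding=1)   # previous-frame heat map
        self.conv_prev = nn.Conv2d(3, feat, 3, padding=1)   # previous frame image
        self.conv_cur  = nn.Conv2d(3, feat, 3, padding=1)   # image to be recognized
        self.encoder = nn.Sequential(                        # stand-in for the encoder
            nn.Conv2d(3 * feat, feat, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, prev_heatmap, prev_img, cur_img):
        fused = torch.cat([self.conv_heat(prev_heatmap),
                           self.conv_prev(prev_img),
                           self.conv_cur(cur_img)], dim=1)   # fuse the convolution results
        return self.encoder(fused)                           # first feature map
```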
S102-2: based on the first image feature, determine second heat map feature information corresponding to the image to be recognized.
In implementation, the target neural network may perform further feature processing on the determined first feature map to obtain the second heat map feature information. For example, a decoding operation may be performed on the first feature map to determine the heat value corresponding to each pixel point in the first feature map, and the second heat map feature information corresponding to the image to be recognized can then be determined.
S102-3: based on the second heat map feature information and the first image feature, determine the second lane line in the image to be recognized and the offset information corresponding to the second lane line.
In this step, the heat value of each pixel point in the second heat map may first be determined based on the second heat map feature information, so that the initial pixel points corresponding to the second lane points in the second lane lines can be determined; then, according to the position of each determined initial pixel point in the second heat map, feature clustering may be performed on the features corresponding to the second lane points to determine the second lane line to which each second lane point belongs. In implementation, a feature embedding branch in the target neural network may be used to perform feature clustering on the features corresponding to the second lane points to determine each second lane line, where the feature embedding branch may be an embedding layer in the target neural network.
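The selection of lane points from the heat map and their grouping into lane instances via embedding features, as described above, might be sketched as follows; the threshold value and the use of DBSCAN as the clustering method are assumptions (the disclosure only states that the feature embedding branch clusters the features of the lane points).

```python
import numpy as np
from sklearn.cluster import DBSCAN

def group_lane_points(heatmap: np.ndarray, embeddings: np.ndarray, thresh: float = 0.5):
    """Pick lane points from the heat map and cluster their embeddings into lane lines.

    heatmap:    (H, W) heat values of the image to be recognized.
    embeddings: (H, W, D) per-pixel embedding features.
    """
    rows, cols = np.where(heatmap > thresh)          # initial pixel points of lane points
    feats = embeddings[rows, cols]                   # embedding feature of each lane point
    labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(feats)

    lanes = {}
    for r, c, lab in zip(rows, cols, labels):
        if lab == -1:                                # noise points are discarded
            continue
        lanes.setdefault(lab, []).append((r, c))     # one entry per second lane line
    return lanes
```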
Here, basing the processing on the previous frame image, the first heat map feature information corresponding to the previous frame image and the image to be recognized allows the obtained first image feature to contain both the image features of the image to be recognized and those of the previous frame image and its corresponding first heat map feature information, which increases the richness of the feature information contained in the first image feature. Based on the first image feature containing rich feature information, accurate second heat map feature information can then be obtained; and since the heat values of lane lines differ from those of other regions in the image, the second lane lines can be accurately determined based on the second heat map feature information, and the offset information corresponding to the second lane lines can be accurately determined using the first image feature that incorporates the previous frame image.
In some implementations, a maximum number of recognized lane lines can also be preset; when the number of lane lines recognized in any frame of the image to be recognized is greater than the maximum number, anomaly prompt information can be generated. That is, during the driving of the target vehicle, the number of lane lines of the road it is on is limited; if too many lane lines are recognized, there may be a detection error, which would affect the safety of automatic driving. Therefore, setting a maximum number of lane lines and generating anomaly prompt information can improve the safety of automatic driving.
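The sanity check on the number of recognized lane lines could be as simple as the following; the maximum value and the form of the anomaly prompt are assumptions.

```python
MAX_LANES = 8  # assumed upper bound for lane lines in one frame

def check_lane_count(lanes: dict) -> None:
    """Generate an anomaly prompt when too many lane lines are detected in one frame."""
    if len(lanes) > MAX_LANES:
        print(f"Warning: {len(lanes)} lane lines detected, "
              f"exceeding the preset maximum of {MAX_LANES}; possible detection error.")
```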
The target neural network may also determine each first lane line in the previous frame image according to the feature processing of the first image feature; based on the first lane lines and the second lane lines, the distance between each first lane line and each second lane line can be determined, and from these distances, the closest first lane line corresponding to each second lane line can be determined. Then, for each second lane line, the offset information of that second lane line relative to the closest first lane line can be determined. In implementation, the steps of determining the second lane lines, the offset information corresponding to the second lane lines and the second heat map feature information may be completed by an inference module in the target neural network. FIG. 3 is a schematic diagram of an inference module in a target neural network performing image recognition on an image to be recognized and determining second lane lines, provided by an embodiment of the present disclosure. As shown in FIG. 3, the inference module may perform convolution operations on the first heat map 31 corresponding to the previous frame image, the previous frame image 32 and the image to be recognized 33 respectively to obtain the convolution result corresponding to each image, and fuse the convolution results to obtain an initial feature map 34 corresponding to the initial image features; an encoder 35 is then used to perform feature encoding on the initial image features to obtain a first feature map corresponding to the image to be recognized (i.e. the first feature map 36 corresponding to the first image feature). Further feature processing is performed on the first feature map 36 to obtain the second heat map feature information; based on the second heat map feature information, a second heat map 37 corresponding to the image to be recognized can be determined, and based on the heat value of each pixel point in the second heat map 37, the second lane lines 38 can be determined. The inference module may also determine each first lane line in the previous frame image according to the feature processing of the first image feature, and then determine, for each second lane line 38, the offset information relative to the closest first lane line, i.e. the offset information 39 corresponding to the second lane line.
In implementation, the target neural network may include an inference module and a tracking module, where the tracking module is used to determine the first lane line matching the second lane line in the image to be recognized, and the inference module is used to determine, for each acquired image to be recognized, the corresponding lane lines, the offset information of the lane lines and the heat map feature information; after this information corresponding to the image to be recognized is determined, it is input to the tracking module to complete the tracking of the second lane lines in the image to be recognized.
FIG. 4 is a schematic flowchart of an inference module in a target neural network performing image recognition on an image to be recognized, provided by an embodiment of the present disclosure. The images 41 used for recognition include any frame of image captured by the automatic driving apparatus (the target vehicle). The Nth frame of the image to be recognized 42 is input into the target neural network 43, and the image feature 44 corresponding to the Nth frame of the image to be recognized can be obtained; from this, the heat map feature information 45 corresponding to the Nth frame of the image to be recognized and the offset information 46 of the lane lines in the Nth frame of the image to be recognized can be obtained. The feature embedding branch 47 is used to perform feature clustering on the features of the initial pixel points corresponding to the lane points (i.e. the image feature 44) to determine the lane lines 48 in the currently processed Nth frame of the image to be recognized. In addition, in FIG. 4, the heat map feature information 45 corresponding to the Nth frame of the image to be recognized is input into the target neural network 43 together with the Nth frame of the image to be recognized, for performing image recognition on the (N+1)th frame of the image to be recognized.
In some embodiments, for the step of performing image recognition on the previous frame image, when the previous frame image is the first frame image, image recognition may be performed on the previous frame image directly, without using the frame before the previous frame image and its corresponding heat map feature information, to determine a second image feature corresponding to the previous frame image. The second image feature may then be processed to determine the first heat map feature information corresponding to the previous frame image, and the first lane lines in the previous frame image may then be determined based on the first heat map feature information and the second image feature. In this way, the recognition processing of the first frame image can be carried out directly from that image, and the first lane lines can be obtained without acquiring a corresponding previous frame image and corresponding heat map feature information, which improves the flexibility of the target neural network processing. For the step of determining the first lane lines in the previous frame image based on the first heat map feature information and the second image feature, reference may be made to the step of determining the second lane lines in the above embodiments.
Moreover, when the previous frame image is the first frame image, since the first frame image has no previous frame image, the lane lines in the first frame image have no offset relative to lane lines in a previous frame image, and there is no need to match the lane lines in the first frame image with lane lines in a previous frame image; therefore, the offset information corresponding to the first frame image output by the target neural network can be deleted.
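The first-frame special case described in the two paragraphs above could be handled with a simple conditional; the function names `infer` and `infer_single` are placeholders, not the disclosed interface.

```python
def recognize_frame(net, image, prev_image=None, prev_heatmap=None):
    """Run recognition; for the first frame there is no previous frame or heat map."""
    if prev_image is None or prev_heatmap is None:
        # First frame: recognize from the image alone; offsets are meaningless and dropped.
        heatmap, lanes, _offsets = net.infer_single(image)
        return heatmap, lanes, None
    # Later frames: use the previous frame and its heat map as additional inputs.
    heatmap, lanes, offsets = net.infer(image, prev_image, prev_heatmap)
    return heatmap, lanes, offsets
```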
In one embodiment, S103 may be implemented according to the following steps:
S103-1: based on the second heat map feature information, determine the initial pixel points corresponding to each second lane line.
Here, the initial pixel points are pixel points in the second heat map.
In implementation, based on the second heat map feature information, the heat value of each pixel point in the second heat map can be determined; it can then be determined whether each pixel point belongs to a lane point, and the pixel points belonging to lane points are taken as the initial pixel points corresponding to each second lane line.
S103-2: based on the offset information corresponding to each second lane line and the initial pixel points corresponding to each second lane line, respectively determine the target pixel points corresponding to each second lane line.
In this step, for each second lane line, the offset information corresponding to each second lane point in the second lane line may be used to adjust the position of the initial pixel point corresponding to each second lane point, determine the target position corresponding to each initial pixel point, and take the initial pixel point at the target position as a target pixel point corresponding to the second lane line.
In some embodiments, the target pixel points corresponding to each second lane line can be determined according to the following steps one and two:
步骤一、针对每个第二车道线,基于第二车道线对应的偏移量信息,确定第二车道线中的每个初始像素点对应的偏移值。 Step 1. For each second lane line, based on the offset information corresponding to the second lane line, determine the offset value corresponding to each initial pixel point in the second lane line.
这里,第二车道线对应的偏移量信息能够表征该第二车道线中的各个第二车道点对应的偏移值。本步骤中,针对每个第二车道线,可以根据该第二车道线对应的偏移量信息,确定出该第二车道线中的各个第二车道点对应的初始像素点的偏移值。Here, the offset information corresponding to the second lane line can represent the offset value corresponding to each second lane point in the second lane line. In this step, for each second lane line, the offset value of the initial pixel point corresponding to each second lane point in the second lane line may be determined according to the offset information corresponding to the second lane line.
步骤二、基于每个初始像素点对应的偏移值,确定第二车道线对应目标像素点。Step 2: Determine the target pixel point corresponding to the second lane line based on the offset value corresponding to each initial pixel point.
这里,可以利用每个初始像素点的偏移值,对该初始像素点的位置进行调整,确定出每个初始像素点对应的目标像素点,也即,得到该第二车道线中的各个第二车道点对应的目标像素点。进而,可以分别基于每个第二车道线对应的偏移量信息及其对应的初始像素点,确定出每个第二车道线中的各个第二车道点对应的目标像素点。Here, the offset value of each initial pixel point can be used to adjust the position of the initial pixel point to determine the target pixel point corresponding to each initial pixel point, that is, to obtain each first pixel point in the second lane line The target pixel corresponding to the second lane point. Furthermore, target pixel points corresponding to each second lane point in each second lane line may be determined based on the offset information corresponding to each second lane line and the corresponding initial pixel point respectively.
这样,基于第二车道线对应的偏移量信息,可以得到第二车道线中的每个初始像素点的偏移值,进而,可以利用偏移值对每个初始像素点的位置进行调整,得到每个初始像素点对应的准确的目标像素点In this way, based on the offset information corresponding to the second lane line, the offset value of each initial pixel point in the second lane line can be obtained, and then the position of each initial pixel point can be adjusted by using the offset value, Get the exact target pixel corresponding to each initial pixel
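As a hedged illustration of step 1 and step 2, the sketch below shifts each initial pixel point by its offset value; it assumes the offsets are given along the image width direction with the image height fixed (consistent with the offset branch described later), which is otherwise an illustrative choice:

```python
import numpy as np

def target_pixel_points(initial_points: np.ndarray, x_offsets: np.ndarray) -> np.ndarray:
    """Shift initial lane-point pixels by the predicted offsets to get target pixel points.

    initial_points: (N, 2) array of (row, col) coordinates for one second lane line.
    x_offsets:      (N,) predicted offsets along the image-width direction, one per point
                    (the height coordinate is kept fixed).
    """
    targets = initial_points.astype(np.float32).copy()
    targets[:, 1] += x_offsets          # adjust the column (width) coordinate by the offset value
    return targets
```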
S103-3: Determine, based on the target pixel points corresponding to each second lane line and the pixel points corresponding to each first lane line, the first lane line matching at least one second lane line, so as to obtain a tracking result for the at least one second lane line.
In implementation, for each second lane line and each first lane line, the initial distances between the target pixel points corresponding to the second lane line and the pixel points of the first lane line can be determined from the positions of the target pixel points corresponding to the second lane points of the second lane line and the positions of the pixel points corresponding to the first lane points of the first lane line. Here, an initial distance may be the distance between a target pixel point and a first-lane-line pixel point located at the same height, that is, the target pixel point and the first-lane-line pixel point associated with each initial distance lie at the same image height.
The target distance between the second lane line and the first lane line can then be determined from the initial distances corresponding to the individual target pixel points, and in turn the target distance between every second lane line and every first lane line can be determined.
Then, for each first lane line, the second lane line with the shortest target distance to it can be determined, and that second lane line is taken as the lane line matching the first lane line, yielding the tracking result of that second lane line. On this basis, the tracking result of each second lane line can be determined.
Alternatively, for each second lane line and each first lane line, the similarity between each first lane line and each second lane line may be determined from the features of the target pixel points corresponding to the second lane points of the second lane line and the features of the pixel points corresponding to the first lane points of the first lane line, and the tracking result of each second lane line is then determined based on the determined similarities.
Here, since the heat value corresponding to a second lane line differs from the heat values of other regions in the image to be recognized, the heat value of each pixel point can be determined based on the second heat map feature information, and the initial pixel points corresponding to the second lane line can be determined accordingly. The offset information reflects the prediction error when predicting the second lane line; adjusting the positions of the initial pixel points with the determined offset information adjusts the predicted position of the second lane line and determines its accurate position in the image to be recognized. Then, using the target pixel points corresponding to the second lane line and the pixel points corresponding to each first lane line, the distance between the second lane line and each first lane line can be determined accurately, so that lane line matching based on the determined distances can accurately determine the tracking result corresponding to the second lane line.
In some embodiments, S103-3 may be implemented according to the following steps:
S103-3-1: For each second lane line, select, based on the second heat map feature information, the pixel points to be matched from among the target pixel points of the second lane line.
Here, the pixel points to be matched are the pixel points selected from the target pixel points that are used to determine the first lane line matching the second lane line. In implementation, for each second lane line, the heat value corresponding to each target pixel point of the second lane line can first be determined. For each target pixel point of the second lane line, its corresponding initial pixel point can be determined, the heat value corresponding to that initial pixel point can then be obtained from the second heat map feature information, and this heat value is taken as the heat value of the target pixel point. In this way, the heat value corresponding to every target pixel point of the second lane line can be determined.
Then, the target pixel points can be sorted in descending order of their heat values, and the target pixel points whose rank is greater than a preset rank are taken as the pixel points to be matched.
Alternatively, for each target pixel point of each second lane line, the confidence corresponding to each target pixel point can first be determined, and first target pixel points whose confidence is greater than a preset confidence threshold are selected from the target pixel points corresponding to the second lane line. The heat value of each first target pixel point is then determined based on the second heat map feature information, the first target pixel points are sorted by heat value, and the first target pixel points whose rank is greater than the preset rank are taken as the pixel points to be matched.
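The following sketch illustrates one possible realization of S103-3-1; the confidence threshold and the top-k cut-off (standing in for the preset rank), as well as the function and parameter names, are illustrative assumptions:

```python
import numpy as np

def select_points_to_match(targets, heat_values, confidences=None,
                           conf_threshold=0.3, top_k=20):
    """Pick the pixel points used for matching from one lane line's target pixel points.

    targets:     (N, 2) target pixel coordinates of a second lane line.
    heat_values: (N,) heat value looked up at each point's initial pixel.
    confidences: optional (N,) per-point confidence; points at or below conf_threshold are dropped first.
    top_k:       number of highest-heat points kept (stands in for the "preset rank").
    """
    keep = np.ones(len(targets), dtype=bool)
    if confidences is not None:
        keep &= confidences > conf_threshold
    idx = np.where(keep)[0]
    order = idx[np.argsort(-heat_values[idx])]   # sort the surviving points by descending heat value
    return targets[order[:top_k]]
```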
S103-3-2: Determine, based on the pixel points corresponding to each first lane line, a first distance between each pixel point to be matched and each first lane line.
In this step, for each pixel point to be matched of the second lane line and each first lane line, a pixel point with the same height as the pixel point to be matched can be selected from the pixel points corresponding to the first lane line, based on the position of the pixel point to be matched and the positions of the pixel points corresponding to the first lane points of the first lane line; the initial distance between the pixel point to be matched and that pixel point is then determined and taken as the first distance to the first lane line. In this way, for each pixel point to be matched, a pixel point at the same height is selected from the pixel points corresponding to the first lane line and used to determine its initial distance, so that the first distance between every pixel point to be matched and every first lane line is obtained.
S103-3-3: Determine, based on the first distance corresponding to each pixel point to be matched, a second distance between the second lane line and each first lane line.
Here, the second distance may be the Euclidean distance between the two lane lines. For the second lane line and each first lane line, the second distance between them can be determined from the first distances between the pixel points to be matched of the second lane line and the pixel points of the first lane line. In implementation, the average of the first distances corresponding to the pixel points to be matched of the second lane line may be taken as the second distance, or the smallest first distance may be selected as the second distance; alternatively, a weight may be assigned to each pixel point to be matched according to its confidence, and the first distances may be summed according to these weights to obtain the second distance. This is not limited here.
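As a sketch of S103-3-2 and S103-3-3, assuming each lane line is stored as (row, column) lane points and that the mean of the first distances is used as the second distance (one of the aggregation options listed above); the function name and the fallback value for lines with no common height are assumptions:

```python
import numpy as np

def lane_to_lane_distance(points_to_match, first_lane_points):
    """Aggregate per-point first distances into one second distance between two lane lines.

    points_to_match:   (N, 2) (row, col) points selected from a second lane line.
    first_lane_points: (M, 2) (row, col) points of a stored first lane line.
    Pairs points lying at the same image height (row); the second distance here is the mean
    of the per-point column gaps.
    """
    col_by_row = {int(r): c for r, c in first_lane_points}
    first_distances = [abs(c - col_by_row[int(r)])
                       for r, c in points_to_match if int(r) in col_by_row]
    if not first_distances:
        return np.inf                      # no common height: treat the lines as unrelated
    return float(np.mean(first_distances))
```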
In this way, the second distance between each second lane line and each first lane line is determined from the selected pixel points to be matched, and not every target pixel point of the second lane line needs to enter the computation. This reduces the amount of computation for determining the second distance and increases the speed of determining the second distance, thereby helping to increase the speed of determining the tracking result.
S103-3-4: Determine, based on the determined second distances, the first lane line matching the second lane line.
In this step, for each second lane line, the first lane line with the shortest second distance to the second lane line can be determined based on the second distances between the second lane line and each first lane line, and that first lane line is taken as the lane line matching the second lane line.
In some embodiments, when multiple second lane lines are matched to the same first lane line, that is, for any first lane line to which multiple second lane lines are matched, the second distances between each of those second lane lines and the first lane line can be compared; the shortest second distance is determined, and the second lane line corresponding to the shortest second distance is determined to be the lane line that finally matches the first lane line. In this way, the matched lane lines correspond one to one and no other lane line is matched to either of them, which improves the rationality and accuracy of the tracking result.
In addition, for each of the multiple second lane lines other than the one finally matched, the second-shortest second distance corresponding to that second lane line can be determined; then, using this second-shortest second distance and the shortest second distances corresponding to the other second lane lines, it can be determined whether matching first lane lines exist for these second lane lines, so that the matching result of every second lane line is determined.
In some embodiments, for each second lane line, after the matching first lane line is determined, the second distance between the second lane line and that first lane line can further be compared with a first preset value. Here, the first preset value represents the maximum distance between a matched pair of first and second lane lines; if the second distance is greater than this maximum distance, the two lane lines are regarded as unmatched even if the second distance between them is the shortest. In this way, the shortest second distance is further checked against the first preset value, which improves the rationality and accuracy of the determined matching result.
In implementation, when it is determined that the shortest second distance corresponding to a first lane line is less than the first preset value, the first lane line can be taken as the final matching lane line of the second lane line corresponding to that shortest second distance.
In some embodiments, when it is determined that the shortest second distance corresponding to a first lane line is greater than the first preset value, the second lane line corresponding to that shortest second distance can be treated as a new lane line; that is, it can be determined that no lane line matching this second lane line exists among the first lane lines, thereby improving the accuracy of the determined tracking result.
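The sketch below combines S103-3-4 with the one-to-one resolution and the first-preset-value check. It uses a simple greedy global-minimum scheme, which is only one plausible reading of the matching order described above; the function name and the return convention (-1 for a new lane line) are assumptions:

```python
import numpy as np

def match_lanes(second_distances: np.ndarray, max_match_distance: float) -> dict:
    """Greedy one-to-one matching between second lane lines (rows) and first lane lines (cols).

    second_distances: (S, F) matrix of second distances; max_match_distance plays the role of
    the first preset value. Returns a dict second_idx -> first_idx, with -1 marking a new lane line.
    """
    d = second_distances.astype(float).copy()
    result = {}
    while np.isfinite(d).any():
        s, f = np.unravel_index(np.argmin(d), d.shape)   # globally shortest remaining distance
        if d[s, f] > max_match_distance:                 # too far apart: everything left is unmatched
            break
        result[s] = f
        d[s, :] = np.inf                                 # enforce one-to-one correspondence
        d[:, f] = np.inf
    for s in range(second_distances.shape[0]):
        result.setdefault(s, -1)                         # unmatched second lane lines become new lanes
    return result
```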
In some embodiments, after the tracking result corresponding to each second lane line is determined, the number of unsuccessful matches of each first lane line stored in the local database can be determined based on the tracking results. Here, the number of unsuccessful matches represents the number of consecutive times a first lane line has failed to be matched with a second lane line.
Then, for such a first lane line, the number of unsuccessful matches can be compared with a second preset value. If the number of unsuccessful matches is greater than the second preset value, the first lane line has failed to be matched several times in a row, which indicates that the lane line has most likely already disappeared from the driving range of the target vehicle, and the first lane line can be deleted. This reduces the number of stored lane lines and the storage space they occupy, and at the same time improves detection efficiency.
If the number of unsuccessful matches is less than the second preset value, the first lane line can continue to be stored for the next round of matching.
In addition, after a first lane line is matched by a second lane line, the number of unsuccessful matches corresponding to that first lane line can be reset to an initial value, where the initial value may be 0.
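For illustration, a minimal sketch of the bookkeeping described above; the data layout of `stored_lanes`, the counter field name, and `max_misses` (standing in for the second preset value) are hypothetical:

```python
def update_lane_database(stored_lanes: dict, matched_ids: set, max_misses: int) -> dict:
    """Maintain the locally stored first lane lines after one round of matching.

    stored_lanes: lane_id -> {"points": ..., "misses": int}
    matched_ids:  ids of stored lane lines matched by some second lane line in the current frame.
    max_misses:   maximum number of consecutive unsuccessful matches before deletion.
    """
    for lane_id in list(stored_lanes):
        if lane_id in matched_ids:
            stored_lanes[lane_id]["misses"] = 0          # matched: reset the counter to its initial value
        else:
            stored_lanes[lane_id]["misses"] += 1
            if stored_lanes[lane_id]["misses"] > max_misses:
                del stored_lanes[lane_id]                # consecutively unmatched: drop the lane line
    return stored_lanes
```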
FIG. 5 is a schematic flowchart, provided by an embodiment of the present disclosure, of how the tracking module in the target neural network determines the tracking results corresponding to the second lane lines. After obtaining the lane lines and the lane line offset information input by the inference module, the tracking module 51 may first determine whether a lane line is a lane line corresponding to the first frame image (S501), that is, determine whether the image to be recognized corresponding to the lane line is the first frame image. If so, no other lane lines that could match this lane line are stored, and the lane line identification information corresponding to each lane line can be generated directly (S502); moreover, when the image to be recognized is the first frame image, the inference module may input only the lane lines corresponding to the first frame image, without inputting the lane line offset information. The local database 52 is used to store the lane line information determined after image recognition of each image to be recognized, which may include lane point information on the lane lines, lane line number information, and the like. If not, the tracking results of the lane lines input by the inference module can be determined according to the lane line tracking method described in the foregoing embodiments. As shown in FIG. 5, this can be done according to the following steps S521 to S524: step S521, determining the pixel points to be matched corresponding to each lane line based on the heat map feature information corresponding to the N-th frame image to be recognized; step S522, determining the second distances between each lane line corresponding to the N-th frame image to be recognized and each lane line corresponding to the previous frame image; step S523, determining the tracking result corresponding to each lane line of the N-th frame image to be recognized; step S524, replacing the matched lane lines corresponding to the previous frame image with the lane lines corresponding to the N-th frame image to be recognized.
In some embodiments, for S102, after the image to be recognized and the previous frame image are acquired, the image to be recognized and the previous frame image may each be down-sampled by a preset sampling multiple, so as to obtain a new, sampled image to be recognized corresponding to the image to be recognized and a new, sampled previous frame image corresponding to the previous frame image. For example, the image to be recognized and the previous frame image may be down-sampled by a factor of 4 or by a factor of 8.
The target neural network can then be used to convolve the new image to be recognized, the new previous frame image, and the first heat map corresponding to the previous frame image, respectively, to obtain the first image feature corresponding to the new image to be recognized. The first image feature is then used to determine the second lane lines in the image to be recognized and the offset information corresponding to the second lane lines.
Here, since the first heat map is obtained by performing image recognition on the previous frame image, the down-sampling of the previous frame image has already been completed during that recognition; the resulting first heat map therefore corresponds to the sampled previous frame image and can be convolved directly, without further down-sampling.
In this way, through down-sampling, each pixel point in the new images (the new previous frame image and the new image to be recognized) can represent the information of multiple pixel points corresponding to the sampling multiple, and using the information of each pixel point in the new images helps determine more accurate second lane lines and their corresponding offset information.
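A minimal sketch of the down-sampling step, assuming OpenCV is available and that bilinear interpolation is acceptable; the disclosure only fixes the sampling multiple (e.g. 4 or 8), not the resampling method, so these choices are assumptions:

```python
import cv2

def downsample_pair(image_to_recognize, previous_image, stride: int = 8):
    """Down-sample both input frames by the preset sampling multiple before feeding the network.

    stride is the preset sampling multiple (e.g. 4 or 8); the previous frame's first heat map is
    assumed to already be at this resolution and is therefore left untouched.
    """
    def down(img):
        h, w = img.shape[:2]
        return cv2.resize(img, (w // stride, h // stride), interpolation=cv2.INTER_LINEAR)
    return down(image_to_recognize), down(previous_image)
```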
In some embodiments, since the lane line tracking method provided by the embodiments of the present disclosure is executed by a target neural network, and the target neural network can be trained to improve its prediction accuracy, the embodiments of the present disclosure further provide the following steps for training the target neural network with multiple sample recognition images:
Step T1: Input a sample recognition image, the first predicted heat map feature information corresponding to the previous-frame sample image of the sample recognition image, and the previous-frame sample image into the target neural network; the target neural network processes the input information and outputs the second predicted heat map feature information corresponding to the sample recognition image, the predicted lane lines corresponding to the sample recognition image, the predicted offset information corresponding to the predicted lane lines, and the predicted tracking result corresponding to each predicted lane line.
Here, the multiple sample recognition images may be images from the same video clip, each containing lane lines; alternatively, the multiple sample recognition images may be separately captured images containing lane lines.
Exemplarily, the multiple sample recognition images may be images from a video clip, captured by an autonomous vehicle, that contains visible lane lines.
The first predicted heat map feature information corresponding to the previous-frame sample image may be obtained by the target neural network performing image recognition on the previous-frame sample image, and the predicted lane lines may be the predicted lane lines in the sample recognition image output by the target neural network.
In implementation, when performing image recognition on the sample recognition image, the sample recognition image, the first predicted heat map feature information corresponding to the previous-frame sample image, and the previous-frame sample image can be input into the target neural network; the inference module in the target neural network processes the input information to determine the sample predicted image feature corresponding to the sample recognition image, and then determines the second predicted heat map feature corresponding to the sample recognition image based on the sample predicted image feature. The predicted lane lines corresponding to the sample recognition image and the predicted offset information corresponding to the predicted lane lines can then be determined based on the second predicted heat map feature and the sample predicted image feature, after which the tracking module is used to determine the predicted tracking result corresponding to each predicted lane line.
Step T2: Determine a loss value based on the second predicted heat map feature information and the annotated heat map feature information corresponding to the sample recognition image, the predicted lane lines and the annotated lane lines corresponding to the sample recognition image, the predicted offset information and the annotated offset information corresponding to the sample recognition image, and the predicted tracking result corresponding to each predicted lane line together with the annotated tracking result corresponding to each annotated lane line.
In implementation, a first prediction loss value corresponding to the heat map branch of the target neural network can be determined based on the second predicted heat map feature and the annotated heat map feature information (i.e., the ground truth) corresponding to the sample recognition image; a second prediction loss value of the feature embedding branch of the target neural network can be determined based on the predicted lane lines and the annotated lane lines corresponding to the sample recognition image; a third prediction loss value corresponding to the offset prediction branch of the target neural network can be determined based on the predicted offset information and the annotated offset information corresponding to the sample recognition image; and a fourth prediction loss value corresponding to the target neural network can be determined based on the predicted tracking result corresponding to each predicted lane line and the annotated tracking result corresponding to each annotated lane line. The first, second, third, and fourth prediction loss values can then be taken as the loss values corresponding to the target neural network.
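For illustration, a PyTorch-style sketch of combining the four branch losses into one training loss. The specific loss functions, the equal weights, and the dictionary keys are assumptions made for the sketch; the disclosure only names the branches, not the loss forms:

```python
import torch
import torch.nn.functional as F

def total_loss(pred: dict, target: dict, w=(1.0, 1.0, 1.0, 1.0)) -> torch.Tensor:
    """Combine the heat map, feature embedding, offset, and tracking losses.

    pred / target are dicts with keys "heatmap", "embedding", "offset", "tracking".
    """
    l_heat  = F.binary_cross_entropy_with_logits(pred["heatmap"], target["heatmap"])  # heat map branch
    l_embed = F.mse_loss(pred["embedding"], target["embedding"])                      # feature embedding branch
    l_off   = F.smooth_l1_loss(pred["offset"], target["offset"])                      # offset prediction branch
    l_track = F.cross_entropy(pred["tracking"], target["tracking"])                   # tracking result (assumed form)
    return w[0] * l_heat + w[1] * l_embed + w[2] * l_off + w[3] * l_track
```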
Step T3: Adjust the network parameter values of the target neural network according to the loss value until a preset training cut-off condition is satisfied, so as to obtain a trained target neural network.
Here, the preset training cut-off condition may be that the number of iterative training rounds is greater than a preset number of rounds and/or that the prediction accuracy of the trained target neural network reaches a preset accuracy.
In implementation, the target neural network can be iteratively trained according to the loss value, so as to adjust its network parameter values. When the preset training cut-off condition is determined to be satisfied, the training of the target neural network is completed: the network parameter values obtained at this point are taken as the target network parameter values of the target neural network, and the target neural network whose training is completed at this point is taken as the trained target neural network.
In this way, using the second predicted heat map feature, the predicted lane lines, the predicted offset information, and the predicted tracking result corresponding to each predicted lane line, the loss values corresponding to all of the above information can be determined, and adjusting the network parameter values of the target neural network with these loss values can improve the prediction accuracy of the target neural network, thereby improving the accuracy with which the target neural network outputs each of the above items of information.
In some embodiments, the predicted offset information includes the offset of the predicted lane line in at least one image direction.
Here, the predicted offset information may include an offset in the image width direction and/or an offset in the image height direction. By outputting the lane line offset in only one direction, the amount of computation the target neural network needs to perform when outputting the offset information can be effectively reduced, which helps improve the speed and efficiency of information processing.
In implementation, for each predicted lane line, the offset branch of the target neural network may fix one direction when determining the predicted offset corresponding to the predicted lane line, and output the predicted offset of the predicted lane line, relative to the lane line in the previous frame image, in the other direction. For example, with the image height direction fixed, for each predicted lane line and at each image height, the branch can output its offset in the image width direction relative to the lane line in the previous frame image.
FIG. 6 is a schematic diagram of output predicted offset information provided by an embodiment of the present disclosure, in which X denotes the offset in the image width direction and Y denotes each image height in the image height direction. The offset of each predicted lane line in the X direction may vary gradually as the image height increases or decreases. In FIG. 6, different offsets may be represented by different colors; the X-direction offsets represented by the different colors may be indicated by the color bar 60. FIG. 6 shows the predicted offsets 611, 612, 613, and 614 corresponding to predicted lane line 1, predicted lane line 2, predicted lane line 3, and predicted lane line 4, respectively.
In addition, the unit of the offset in the X direction may be related to the down-sampling multiple. For example, when the preset sampling multiple is 8, the unit may be 8 x offset; when the preset sampling multiple is 4, the unit may be 4 x offset.
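As a small illustrative helper (not taken from the disclosure), offsets predicted in feature-map units can be mapped back to full-resolution pixels by multiplying by the sampling multiple:

```python
def offsets_to_pixels(offsets, stride: int):
    """Convert offsets predicted on the down-sampled feature map back to full-resolution pixels.

    offsets: per-height X-direction offsets in feature-map units; stride is the preset sampling
    multiple (e.g. 4 or 8), so a predicted value of 1 corresponds to `stride` image pixels.
    """
    return [o * stride for o in offsets]
```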
In addition, in the process of constructing the target neural network's loss on the predicted offset information using the predicted offset information of each predicted lane line and the corresponding annotated offset information, a smoothness loss for predicting smooth offsets can also be constructed from the annotated offset information and the predicted offset information, and the target neural network can be trained with this smoothness loss. This reduces the problem of the offsets corresponding to several neighboring lane points on the same lane line jumping abruptly when the trained target neural network outputs the predicted offset information for lane lines in the image to be recognized.
For example, for the predicted offset information corresponding to lane line 5, the offsets corresponding to image heights 10 to 15 should not take values such as -2.5 for the lane point at image height 10, -1.5 at image height 12, -1 at image height 13, -6 at image height 14, and 0 at image height 15. Here, the differences between the offset of -6 at image height 14 and the offset of 0 at image height 15, and between -6 and the offset of -1.5 at image height 12, are far larger than the difference between the offset of -2.5 at image height 10 and the offset of -1.5 at image height 12, and the difference between the offset of -1.5 at image height 12 and the offset of -1 at image height 13. The predicted offset information of lane line 5 would therefore exhibit a jump at the lane point corresponding to image height 14, which is unreasonable for the offset information between real lane lines in consecutive frames; the offset information between real lane lines in consecutive frames should change smoothly and gradually.
Therefore, training the target neural network with the smoothness loss can improve its ability to predict smooth offsets; that is, the trained target neural network can adjust the offset of -6 predicted at the lane point corresponding to image height 14, for example outputting an adjusted offset of -0.5, so as to obtain smooth predicted offset information for each lane line as shown in FIG. 6.
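A hedged sketch of one possible smoothness loss on neighboring lane points; the disclosure does not specify the exact form, so a total-variation-style penalty over adjacent image heights is used here purely as an example:

```python
import torch

def offset_smoothness_loss(pred_offsets: torch.Tensor) -> torch.Tensor:
    """Penalize abrupt jumps between offsets of neighboring lane points on the same lane line.

    pred_offsets: (L, H) tensor with one row per predicted lane line and one column per image height.
    """
    diffs = pred_offsets[:, 1:] - pred_offsets[:, :-1]   # offset change between neighboring heights
    return diffs.abs().mean()
```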
In some embodiments, the previous-frame sample image of a sample recognition image may also be obtained through the following step: performing a random translation and/or rotation operation on the sample recognition image to obtain the previous-frame sample image of the sample recognition image.
Here, the random translation operation translates the positions of the pixel points in the sample image, that is, it translates the positions of the pixel points corresponding to the lane lines in the sample recognition image, yielding an image different from the sample recognition image. The random rotation operation rotates the positions of the pixel points in the sample image, likewise yielding an image different from the sample recognition image. In this way, random translation and/or rotation operations change the positions of the pixel points in the sample recognition image and produce a simulated previous frame image, so that the target neural network can be trained from a single frame of sample recognition image, which improves the flexibility of neural network training.
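For illustration, a minimal sketch of synthesizing a previous-frame sample image by random translation and rotation, assuming OpenCV; the perturbation ranges and the function name are hypothetical:

```python
import cv2
import numpy as np

def synthesize_previous_frame(sample_image: np.ndarray,
                              max_shift: int = 10, max_angle: float = 3.0) -> np.ndarray:
    """Simulate a previous-frame sample image by randomly translating and rotating the sample image.

    max_shift (pixels) and max_angle (degrees) bound the random perturbation.
    """
    h, w = sample_image.shape[:2]
    angle = np.random.uniform(-max_angle, max_angle)
    tx, ty = np.random.uniform(-max_shift, max_shift, size=2)
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)  # rotate about the image centre
    m[:, 2] += (tx, ty)                                      # add the random translation
    return cv2.warpAffine(sample_image, m, (w, h))
```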
FIG. 7 is a schematic diagram of an implementation flow for training the target neural network provided by an embodiment of the present disclosure. FIG. 7 shows the process of training the target neural network with multiple sample recognition images. After performing image recognition on the N-th frame sample recognition image among the multiple sample recognition images, the target neural network 70 can determine the predicted heat map feature information 701 corresponding to the N-th frame sample recognition image, the predicted lane lines 702 corresponding to the sample recognition image, the predicted offset information 703 of each predicted lane line, and the predicted tracking result 704 of each predicted lane line. Subsequently, the heat map prediction loss value 711 of the target neural network for the N-th frame sample recognition image can be determined based on the predicted heat map feature information 701 and the annotated heat map feature information corresponding to the N-th frame sample recognition image; the lane line prediction loss value 712 for the N-th frame sample recognition image can be determined based on the predicted lane lines 702 and the annotated lane lines corresponding to the N-th frame sample recognition image; the offset information prediction loss value 713 for the N-th frame sample recognition image can be determined based on the predicted offset information 703 of each predicted lane line and the annotated offset information corresponding to each predicted lane line; and the tracking loss value 714 for the N-th frame sample recognition image can be determined based on the determined predicted tracking results 704 of the predicted lane lines and the annotated tracking results corresponding to the predicted lane lines. Similarly, following the above steps, the predicted heat map feature information 721 corresponding to the (N+1)-th frame sample recognition image, the predicted lane lines 722 corresponding to that sample recognition image, the predicted offset information 723 of each predicted lane line, and the predicted tracking result 724 of each predicted lane line can be determined, as well as the heat map prediction loss value 731, the lane line prediction loss value 732, the offset information prediction loss value 733, and the tracking loss value 734. The target neural network 70 can then be iteratively trained using the loss values corresponding to the N-th frame sample recognition image together with the loss values corresponding to the (N+1)-th frame sample recognition image, so as to adjust the network parameter values of the target neural network; when the training cut-off condition is satisfied, the trained target neural network is obtained.
In addition, in FIG. 7, if the previous frame image is the first frame image, the loss may be constructed without using the offset corresponding to that frame image, and the target neural network is trained with losses constructed from the other information only; for non-first-frame images, all of the above losses need to be constructed for training the target neural network.
Those skilled in the art can understand that, in the above method, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
Based on the same inventive concept, an embodiment of the present disclosure further provides a lane line tracking apparatus corresponding to the lane line tracking method. Since the principle by which the apparatus in the embodiment of the present disclosure solves the problem is similar to that of the above lane line tracking method of the embodiments of the present disclosure, the implementation of the apparatus may refer to the implementation of the method.
FIG. 8 is a schematic diagram of the composition and structure of a lane line tracking apparatus provided by an embodiment of the present disclosure. As shown in FIG. 8, the apparatus includes:
an acquisition part 801, configured to acquire the previous frame image of an image to be recognized, and the first lane lines obtained by performing image recognition on the previous frame image;
a first determination part 802, configured to determine, based on the previous frame image and the image to be recognized, the second lane lines in the image to be recognized, and the offset information of each second lane line relative to the nearest first lane line; and
a second determination part 803, configured to determine, based on the offset information corresponding to each second lane line and each first lane line, a first lane line matching at least one second lane line, so as to obtain a tracking result of the at least one second lane line.
In some implementations, the first determination part 802 is further configured to: determine, based on the previous frame image, the first heat map feature information corresponding to the previous frame image, and the image to be recognized, the first image feature corresponding to the image to be recognized; determine, based on the first image feature, the second heat map feature information corresponding to the image to be recognized; and determine, based on the second heat map feature information and the first image feature, the second lane lines in the image to be recognized and the offset information corresponding to the second lane lines.
In some implementations, the second determination part 803 is further configured to: determine, based on the second heat map feature information, the initial pixel points corresponding to each second lane line; determine, based on the offset information corresponding to each second lane line and the initial pixel points corresponding to each second lane line, the target pixel points corresponding to each second lane line; and determine, based on the target pixel points corresponding to each second lane line and the pixel points corresponding to each first lane line, the first lane line matching at least one second lane line, so as to obtain the tracking result of the at least one second lane line.
In some implementations, the second determination part 803 is further configured to: for each second lane line, determine, based on the offset information corresponding to the second lane line, the offset value corresponding to each initial pixel point in the second lane line; and determine, based on the offset value corresponding to each initial pixel point, the target pixel points corresponding to the second lane line.
In some implementations, the second determination part 803 is further configured to: for each second lane line, select, based on the second heat map feature information, the pixel points to be matched from the target pixel points of the second lane line; determine, based on the pixel points corresponding to each first lane line, a first distance between each pixel point to be matched and each first lane line; determine, based on the first distance corresponding to each pixel point to be matched, a second distance between the second lane line and each first lane line; and determine, based on the determined second distances, the first lane line matching the second lane line.
In some implementations, the second determination part 803 is further configured to: after determining the first lane line matching the second lane line, when multiple second lane lines are matched to the same first lane line, determine that the second lane line with the shortest distance to that first lane line matches that first lane line.
In some implementations, the second determination part 803 is further configured to: when the shortest second distance is less than a first preset value, take the first lane line corresponding to the shortest second distance as the lane line matching the second lane line.
In some implementations, the second determination part 803 is further configured to: when the shortest second distance is greater than the first preset value, treat the second lane line as a new lane line.
In some implementations, the apparatus further includes a deletion part 804, configured to: after the tracking results are obtained, determine, for each first lane line that has not been successfully matched, the number of unsuccessful matches of the first lane line, where the number of unsuccessful matches represents the number of times the corresponding first lane line has failed to be matched with a second lane line over consecutive frames of images to be recognized; and delete the first lane line when the number of unsuccessful matches is greater than a second preset value.
In some implementations, the lane line tracking method is executed by a target neural network, and the target neural network is obtained by training with multiple sample recognition images.
In some implementations, the apparatus further includes a training part 805, configured to train the target neural network by the following steps: inputting the sample recognition image, the first predicted heat map feature information corresponding to the previous-frame sample image of the sample recognition image, and the previous-frame sample image into the target neural network, the target neural network outputting the second predicted heat map feature information corresponding to the sample recognition image, the predicted lane lines corresponding to the sample recognition image, the predicted offset information corresponding to the predicted lane lines, and the predicted tracking result corresponding to each predicted lane line; determining a loss value based on the second predicted heat map feature information, the annotated heat map feature information corresponding to the sample recognition image, the predicted lane lines, the annotated lane lines corresponding to the sample recognition image, the predicted offset information, the annotated offset information corresponding to the sample recognition image, the predicted tracking result corresponding to each predicted lane line, and the annotated tracking result corresponding to each annotated lane line; and adjusting the network parameter values of the target neural network according to the loss value until a preset training cut-off condition is satisfied, so as to obtain the trained target neural network.
In some implementations, the predicted offset information includes the offset of the predicted lane line in at least one image direction.
In some implementations, the training part 805 is further configured to obtain the previous-frame sample image of the sample recognition image by the following step: performing a random translation and/or rotation operation on the sample recognition image to obtain the previous-frame image of the sample recognition image.
In some implementations, the acquisition part 801 is further configured to: when the previous frame image is the first frame image, perform image recognition on the previous frame image to determine the second image feature corresponding to the previous frame image; determine, based on the second image feature, the first heat map feature information corresponding to the previous frame image; and determine, based on the first heat map feature information and the second image feature, the first lane lines in the previous frame image.
In some implementations, the first determination part 802 is further configured to: down-sample the image to be recognized and the previous frame image respectively according to a preset sampling multiple to obtain a new image to be recognized and a new previous frame image; and determine, based on the new previous frame image and the new image to be recognized, the second lane lines in the image to be recognized and the offset information of each second lane line relative to the nearest first lane line.
For descriptions of the processing flow of each part of the apparatus and the interaction flow between the parts, reference may be made to the relevant descriptions in the above method embodiments.
It should be noted that, in the embodiments of the present disclosure and other embodiments, a "part" may be part of a circuit, part of a processor, part of a program or software, and the like; it may also be a unit, and may further be a module or be non-modular.
An embodiment of the present disclosure further provides a computer device. As shown in FIG. 9, which is a schematic structural diagram of a computer device provided by an embodiment of the present disclosure, the computer device includes:
a processor 91 and a memory 92. The memory 92 stores machine-readable instructions executable by the processor 91, and the processor 91 is configured to execute the machine-readable instructions stored in the memory 92. When the machine-readable instructions are executed by the processor 91, the processor 91 performs the following steps: S101: acquiring the previous frame image of an image to be recognized, and the first lane lines obtained by performing image recognition on the previous frame image; S102: determining, based on the previous frame image and the image to be recognized, the second lane lines in the image to be recognized and the offset information of each second lane line relative to the nearest first lane line; and S103: determining, based on the offset information corresponding to each second lane line and each first lane line, a first lane line matching at least one second lane line, so as to obtain a tracking result of the at least one second lane line. The memory 92 includes an internal memory 921 and an external memory 922; the internal memory 921 is used to temporarily store operation data in the processor 91 and data exchanged with the external memory 922, such as a hard disk, and the processor 91 exchanges data with the external memory 922 through the internal memory 921.
For the execution process of the above instructions, reference may be made to the steps of the lane line tracking method described in the embodiments of the present disclosure.
An embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the lane line tracking method described in the above method embodiments are executed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The computer program product of the lane line tracking method provided by the embodiments of the present disclosure includes a computer-readable storage medium storing program code; the instructions included in the program code can be used to execute the steps of the lane line tracking method described in the above method embodiments, for which reference may be made to the above method embodiments. The computer program product may be implemented by hardware, software, or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (SDK).
Those skilled in the art can clearly understand that, for convenience and brevity of description, reference may be made to the corresponding processes in the foregoing method embodiments for the specific working process of the apparatus described above. In the several embodiments provided in the present disclosure, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the division of the units is only a division by logical function, and there may be other division manners in actual implementation; for example, multiple units or components may be combined, or some features may be ignored or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some communication interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purposes of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on such an understanding, the technical solutions of the present disclosure, in essence, or the part contributing to the prior art, or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Finally, it should be noted that the embodiments described above are merely exemplary implementations of the present disclosure, which are intended to illustrate rather than limit the technical solutions of the present disclosure, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that any person skilled in the art may still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent replacements of some of the technical features within the technical scope disclosed by the present disclosure; such modifications, changes, or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure, and shall all fall within the protection scope of the present disclosure.
Industrial Applicability
Embodiments of the present disclosure provide a lane line tracking method and apparatus, a computer device, a storage medium, and a computer program product. The method includes: acquiring a previous frame image of an image to be recognized, and a first lane line obtained by performing image recognition on the previous frame image; determining, based on the previous frame image and the image to be recognized, a second lane line in the image to be recognized, and offset information of the second lane line relative to the nearest first lane line; and determining, based on the offset information corresponding to each second lane line and each first lane line, a first lane line matching at least one second lane line, to obtain a tracking result of the at least one second lane line. Through the embodiments of the present disclosure, in the process of detecting and tracking the second lane line in the image to be recognized, the accuracy of the determined second lane line and of the offset information corresponding to the second lane line can be improved, thereby improving the accuracy of the determined tracking result.
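As an illustration of the offset-based matching summarized above, the following is a minimal sketch in Python: each detected second lane line is shifted by its predicted per-point offsets, compared with every first lane line through an averaged nearest-point distance, and either inherits the closest existing track or is treated as a newly appearing lane line. The function name, the distance threshold, and the use of NumPy are illustrative assumptions rather than details fixed by the disclosure; confidence-based filtering of the points to be matched and tie-breaking between competing matches are omitted for brevity.

    import numpy as np

    def match_lanes(second_lanes, offsets, first_lanes, next_id, dist_threshold=20.0):
        # second_lanes: list of (Ni, 2) arrays of detected pixel points
        # offsets:      list of (Ni, 2) per-point offsets toward the previous frame
        # first_lanes:  dict {track_id: (Mj, 2) array of previous-frame points}
        results = []
        for pts, off in zip(second_lanes, offsets):
            shifted = pts + off  # move the detected points toward the previous frame
            best_id, best_dist = None, np.inf
            for track_id, ref in first_lanes.items():
                # distance from each shifted point to its nearest reference point,
                # averaged over the lane line (the per-lane "second distance")
                d = np.linalg.norm(shifted[:, None, :] - ref[None, :, :], axis=-1)
                lane_dist = d.min(axis=1).mean()
                if lane_dist < best_dist:
                    best_id, best_dist = track_id, lane_dist
            if best_id is not None and best_dist < dist_threshold:
                results.append((best_id, pts))   # inherit the previous track identity
            else:
                results.append((next_id, pts))   # no close match: start a new lane line
                next_id += 1
        return results, next_id

In this sketch the distance threshold plays the role of the first preset value: when even the closest first lane line lies beyond it, the second lane line is reported as a new lane rather than being forced onto an existing track, and a first lane line that stays unmatched over several consecutive frames can then be dropped as described above.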

Claims (19)

  1. A lane line tracking method, comprising:
    acquiring a previous frame image of an image to be recognized, and a first lane line obtained by performing image recognition on the previous frame image;
    determining, based on the previous frame image and the image to be recognized, a second lane line in the image to be recognized, and offset information of the second lane line relative to the nearest first lane line; and
    determining, based on the offset information corresponding to each second lane line and each first lane line, a first lane line matching at least one second lane line, to obtain a tracking result of the at least one second lane line.
  2. The method according to claim 1, wherein the determining, based on the previous frame image and the image to be recognized, a second lane line in the image to be recognized, and offset information of the second lane line relative to the nearest first lane line comprises:
    determining, based on the previous frame image, first heat map feature information corresponding to the previous frame image, and the image to be recognized, a first image feature corresponding to the image to be recognized;
    determining, based on the first image feature, second heat map feature information corresponding to the image to be recognized; and
    determining, based on the second heat map feature information and the first image feature, the second lane line in the image to be recognized and the offset information corresponding to the second lane line.
  3. The method according to claim 2, wherein the determining, based on the offset information corresponding to each second lane line and each first lane line, a first lane line matching at least one second lane line, to obtain a tracking result of the at least one second lane line comprises:
    determining, based on the second heat map feature information, initial pixel points corresponding to each second lane line;
    determining, based on the offset information corresponding to each second lane line and the initial pixel points corresponding to each second lane line, target pixel points corresponding to each second lane line respectively; and
    determining, based on the target pixel points corresponding to each second lane line and pixel points corresponding to each first lane line, the first lane line matching the at least one second lane line, to obtain the tracking result of the at least one second lane line.
  4. The method according to claim 3, wherein the determining, based on the offset information corresponding to each second lane line and the initial pixel points corresponding to each second lane line, target pixel points corresponding to each second lane line respectively comprises:
    for each second lane line, determining, based on the offset information corresponding to the second lane line, an offset value corresponding to each initial pixel point of the second lane line; and
    determining, based on the offset value corresponding to each initial pixel point, the target pixel points corresponding to the second lane line.
  5. The method according to claim 3 or 4, wherein the determining, based on the target pixel points corresponding to each second lane line and the pixel points corresponding to each first lane line, the first lane line matching the at least one second lane line comprises:
    for each second lane line, selecting, based on the second heat map feature information, pixel points to be matched from the target pixel points of the second lane line;
    determining, based on the pixel points corresponding to each first lane line, a first distance between each pixel point to be matched and each first lane line;
    determining, based on the first distance corresponding to each pixel point to be matched, a second distance between the second lane line and each first lane line; and
    determining, based on the determined second distance, the first lane line matching the second lane line.
  6. The method according to claim 5, wherein after the first lane line matching the second lane line is determined, the method further comprises:
    in a case where multiple second lane lines match a same first lane line, determining that the second lane line having the shortest distance to the first lane line matches the first lane line.
  7. The method according to claim 5 or 6, wherein the determining, based on the determined second distance, the first lane line matching the second lane line comprises:
    in a case where the shortest second distance is less than a first preset value, taking the first lane line corresponding to the shortest second distance as the lane line matching the second lane line.
  8. The method according to claim 7, wherein the determining, based on the determined second distance, the first lane line matching the second lane line further comprises:
    in a case where the shortest second distance is greater than the first preset value, taking the second lane line as a new lane line.
  9. The method according to any one of claims 1 to 8, wherein after the tracking result is obtained, the method further comprises:
    for each first lane line that is not successfully matched, determining a number of unsuccessful matches of the first lane line, wherein the number of unsuccessful matches represents the number of times the corresponding first lane line has failed to match any second lane line over consecutive frames of images to be recognized; and
    deleting the first lane line in a case where the number of unsuccessful matches is greater than a second preset value.
  10. The method according to any one of claims 1 to 9, wherein the lane line tracking method is performed by a target neural network, and the target neural network is obtained by training with a plurality of sample recognition images.
  11. The method according to claim 10, wherein the target neural network is trained by the following steps:
    inputting the sample recognition image, first predicted heat map feature information corresponding to a previous frame sample image of the sample recognition image, and the previous frame sample image into the target neural network, the target neural network outputting second predicted heat map feature information corresponding to the sample recognition image, predicted lane lines corresponding to the sample recognition image, predicted offset information corresponding to the predicted lane lines, and a predicted tracking result corresponding to each predicted lane line;
    determining a loss value based on the second predicted heat map feature information, annotated heat map feature information corresponding to the sample recognition image, the predicted lane lines, annotated lane lines corresponding to the sample recognition image, the predicted offset information, annotated offset information corresponding to the sample recognition image, the predicted tracking result corresponding to each predicted lane line, and an annotated tracking result corresponding to each annotated lane line; and
    adjusting network parameter values of the target neural network according to the loss value until a preset training cut-off condition is met, to obtain a trained target neural network.
  12. The method according to claim 11, wherein the predicted offset information comprises an offset of the predicted lane line in at least one image direction.
  13. The method according to claim 11, wherein the previous frame sample image of the sample recognition image is obtained by the following step:
    performing a random translation and/or rotation operation on the sample recognition image to obtain the previous frame sample image of the sample recognition image.
  14. The method according to any one of claims 1 to 13, wherein the acquiring of the first lane line obtained by performing image recognition on the previous frame image comprises:
    in a case where the previous frame image is a first frame image, performing image recognition on the previous frame image to determine a second image feature corresponding to the previous frame image;
    determining, based on the second image feature, the first heat map feature information corresponding to the previous frame image; and
    determining, based on the first heat map feature information and the second image feature, the first lane line in the previous frame image.
  15. The method according to any one of claims 1 to 14, wherein the determining, based on the previous frame image and the image to be recognized, a second lane line in the image to be recognized, and offset information of the second lane line relative to the nearest first lane line comprises:
    performing, according to a preset sampling multiple, downsampling processing on the image to be recognized and the previous frame image respectively, to obtain a new image to be recognized and a new previous frame image; and
    determining, based on the new previous frame image and the new image to be recognized, the second lane line in the image to be recognized and the offset information of the second lane line relative to the nearest first lane line.
  16. A lane line tracking apparatus, comprising:
    an acquiring part, configured to acquire a previous frame image of an image to be recognized, and a first lane line obtained by performing image recognition on the previous frame image;
    a first determining part, configured to determine, based on the previous frame image and the image to be recognized, a second lane line in the image to be recognized, and offset information of the second lane line relative to the nearest first lane line; and
    a second determining part, configured to determine, based on the offset information corresponding to each second lane line and each first lane line, a first lane line matching at least one second lane line, to obtain a tracking result of the at least one second lane line.
  17. A computer device, comprising: a processor and a memory, wherein the memory stores machine-readable instructions executable by the processor, the processor is configured to execute the machine-readable instructions stored in the memory, and when the machine-readable instructions are executed by the processor, the processor performs the steps of the lane line tracking method according to any one of claims 1 to 15.
  18. A computer-readable storage medium, having a computer program stored thereon, wherein when the computer program is run by a computer device, the computer device performs the steps of the lane line tracking method according to any one of claims 1 to 15.
  19. A computer program product, comprising a non-transitory computer-readable storage medium storing a computer program, wherein when the computer program is read and executed by a computer, the steps of the method according to any one of claims 1 to 15 are implemented.
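Claims 11 and 13 describe training the target neural network and, in particular, synthesizing the previous frame sample image by randomly translating and/or rotating the sample recognition image, so that a single annotated frame can stand in for a consecutive image pair. The following is a minimal sketch of that augmentation step in Python, assuming an OpenCV-style affine warp; the function name, the shift and angle ranges, and the border handling are illustrative assumptions rather than values specified by the disclosure.

    import cv2
    import numpy as np

    def make_previous_frame(sample_image, max_shift=30.0, max_angle=5.0, rng=None):
        # Synthesize a pseudo previous-frame image from one annotated sample image
        # by a random translation and/or rotation.
        rng = np.random.default_rng() if rng is None else rng
        h, w = sample_image.shape[:2]
        dx, dy = rng.uniform(-max_shift, max_shift, size=2)
        angle = rng.uniform(-max_angle, max_angle)
        # rotation about the image centre combined with a translation
        m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
        m[:, 2] += (dx, dy)
        prev = cv2.warpAffine(sample_image, m, (w, h), borderMode=cv2.BORDER_REPLICATE)
        return prev, m  # the same affine matrix can be applied to the lane annotations

Because the warp that generated the pseudo previous frame is known, the same transform can be applied to the annotated lane lines to obtain the annotated tracking results used in the loss of claim 11.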
PCT/CN2022/110308 2021-11-29 2022-08-04 Lane line tracking method and apparatus, and computer device, storage medium and computer program product WO2023093124A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111433344.8A CN114120281A (en) 2021-11-29 2021-11-29 Lane line tracking method and device, computer equipment and storage medium
CN202111433344.8 2021-11-29

Publications (1)

Publication Number Publication Date
WO2023093124A1 true WO2023093124A1 (en) 2023-06-01

Family

ID=80371415

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/110308 WO2023093124A1 (en) 2021-11-29 2022-08-04 Lane line tracking method and apparatus, and computer device, storage medium and computer program product

Country Status (2)

Country Link
CN (1) CN114120281A (en)
WO (1) WO2023093124A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114120281A (en) * 2021-11-29 2022-03-01 上海商汤临港智能科技有限公司 Lane line tracking method and device, computer equipment and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112560684A (en) * 2020-12-16 2021-03-26 北京百度网讯科技有限公司 Lane line detection method, lane line detection device, electronic apparatus, storage medium, and vehicle
CN113033497A (en) * 2021-04-30 2021-06-25 平安科技(深圳)有限公司 Lane line recognition method, device, equipment and computer-readable storage medium
CN114120281A (en) * 2021-11-29 2022-03-01 上海商汤临港智能科技有限公司 Lane line tracking method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN114120281A (en) 2022-03-01


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22897219

Country of ref document: EP

Kind code of ref document: A1