WO2022028383A1 - Lane line labeling, detection model determination, lane line detection method and related devices (车道线标注、检测模型确定、车道线检测方法及相关设备) - Google Patents

Lane line labeling, detection model determination, lane line detection method and related devices

Info

Publication number
WO2022028383A1
Authority
WO
WIPO (PCT)
Prior art keywords
lane
lane line
line
points
outline
Prior art date
Application number
PCT/CN2021/110183
Other languages
English (en)
French (fr)
Inventor
李莹
肖映彩
袁慧珍
刘聪
虢旭升
Original Assignee
长沙智能驾驶研究院有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 长沙智能驾驶研究院有限公司
Publication of WO2022028383A1 publication Critical patent/WO2022028383A1/zh

Classifications

    • G06F18/00 Pattern recognition
    • G06N3/02 Neural networks; G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06V10/26 Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V10/82 Image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads

Definitions

  • The present application relates to the technical field of intelligent driving, and in particular to a lane line labeling method, a detection model determination method, a lane line detection method, and related devices.
  • Lane line detection has become a basic component of driver assistance and autonomous driving. Accurate detection and identification of lane lines is an important prerequisite for functions such as lane departure warning, lane keeping, and lane changing.
  • In existing approaches, the detected lane lines are prone to sticking to each other at the far end. Such adhesion makes it impossible to fit curves accurately, which lowers the accuracy of the final lane line detection results.
  • a lane line marking method comprising:
  • According to the position information of the points to be drawn corresponding to each lane line and the thickness information of the labeled line at the points to be drawn, the labeled line corresponding to each lane line is drawn, obtaining the lane line semantic label map corresponding to the lane line scene graph.
  • a lane marking device comprising:
  • the label point information acquisition module is used to obtain the position information of the label points on each lane line based on the lane line scene graph;
  • a to-be-drawn point information determination module configured to determine the location information of the to-be-drawn points corresponding to each of the lane lines according to the location information of the marked points on each of the lane lines;
  • a marking line information determination module configured to determine the thickness information of the marking lines corresponding to each lane line at each of the to-be-drawn points based on the position information of the to-be-drawn points corresponding to each of the lane lines;
  • An annotation line drawing module is used to draw the labeled line corresponding to each lane line according to the position information of the points to be drawn corresponding to each lane line and the thickness information of the labeled line at each point to be drawn, obtaining the lane line semantic label map corresponding to the lane line scene graph.
  • a computer device includes a memory and a processor, the memory stores a computer program, and the processor implements the following steps when executing the computer program:
  • According to the position information of the points to be drawn corresponding to each lane line and the thickness information of the labeled line at the points to be drawn, the labeled line corresponding to each lane line is drawn, obtaining the lane line semantic label map corresponding to the lane line scene graph.
  • In the above lane line labeling method, device, computer equipment and storage medium, the position information of the points to be drawn corresponding to a lane line is obtained based on the position information of the marked points on that lane line in the lane line scene graph, and the thickness information of the labeled line at each point to be drawn is determined from the position information of those points, so that the line thickness differs at points to be drawn at different positions. The labeled lane line drawn with this thickness information is thick at the near end and thin at the far end, which reduces the adhesion of the labeled lane lines at the far end of the lane line semantic label map and improves the accuracy of subsequent lane line detection.
  • a method for determining a lane line detection model comprising:
  • a lane line detection model is determined.
  • a device for determining a lane line detection model comprising:
  • the sample acquisition module is used to acquire the scene graph of the sample lane line
  • a lane line labeling module configured to perform lane line labeling on the sample lane line scene graph by using the above lane line labeling method, and obtain a lane line semantic label map corresponding to the sample lane line scene graph;
  • a model training module configured to train the generative adversarial network model to be trained based on the sample lane line scene graph and the lane line semantic label graph, and obtain a trained generative adversarial network model
  • the model determination module is used for determining the lane line detection model according to the generator in the trained generative adversarial network model.
  • a computer device includes a memory and a processor, the memory stores a computer program, and the processor implements the following steps when executing the computer program:
  • a lane line detection model is determined.
  • The above method, device, computer equipment and storage medium for determining a lane line detection model differ from the typical generative adversarial network, which takes a semantic map as input and generates a natural scene image. Here, the real lane line scene graph is used as the input and semantic segmentation is performed to generate a lane line semantic map, which helps remove complex backgrounds and can generate lane lines in occluded areas, giving better robustness and adaptability.
  • a lane line detection method comprising:
  • the lane line detection is performed on the to-be-detected lane line scene graph to obtain a lane line semantic map, and the lane line semantic map includes the position information of each pixel;
  • the lane line in the to-be-detected lane line scene map is determined.
  • a lane line detection device, the device including:
  • the to-be-detected picture acquisition module is used to acquire the to-be-detected lane line scene graph;
  • the lane line detection module is used to perform lane line detection on the to-be-detected lane line scene graph using the lane line detection model determined by the above-mentioned determination method, obtaining a lane line semantic map, where the lane line semantic map includes the position information of each pixel;
  • the lane line determination module is configured to determine the lane line in the to-be-detected lane line scene map based on the position information of each pixel in the lane line semantic map.
  • a computer device includes a memory and a processor, the memory stores a computer program, and the processor implements the following steps when executing the computer program:
  • the lane line detection is performed on the to-be-detected lane line scene graph to obtain a lane line semantic map, and the lane line semantic map includes the position information of each pixel;
  • the lane line in the to-be-detected lane line scene map is determined.
  • In the above lane line detection method, device, computer equipment and storage medium, the generator in the generative adversarial network model generates the lane line semantic map corresponding to the to-be-detected lane line scene graph. This enables end-to-end detection and removes the need for steps such as preprocessing and post-computation on the scene graph; the detection distance is longer, fewer parameters need manual tuning, and robustness is better. Compared with probability-map-based semantic segmentation neural networks, which can only detect a fixed number of lane lines and cannot generate lane lines in occluded areas, using a generative adversarial network for lane line detection can detect all lane lines in the scene graph at the same time and can generate lane lines in occluded areas, improving detection accuracy and adapting to most complex road scenes.
  • FIG. 1 is a schematic flowchart of a method for marking lane lines in one embodiment
  • FIG. 2 is a schematic diagram of a lane line scene graph in one embodiment
  • FIG. 3 is a schematic diagram of a lane line semantic label map in one embodiment
  • FIG. 4 is a schematic flowchart of a method for determining a lane line detection model in one embodiment
  • FIG. 5 is a schematic flowchart of a lane line detection method in one embodiment
  • FIG. 6 is a schematic diagram of a lane line outline in one embodiment
  • FIG. 7 is a schematic diagram of a lane line outline in one embodiment
  • FIG. 8 is a schematic diagram of a lane line outline in one embodiment
  • FIG. 9 is a structural block diagram of a lane marking device in one embodiment
  • FIG. 10 is a structural block diagram of a device for determining a lane line detection model in one embodiment
  • FIG. 11 is a structural block diagram of a lane line detection device in one embodiment
  • Figure 13 is a diagram of the internal structure of a computer device in one embodiment.
  • the lane line marking method, the lane line detection model determination method and the lane line detection method provided by the present application can be applied to a vehicle intelligent driving system, and the vehicle intelligent driving system includes a vehicle controller and a collection device.
  • the collection device can be installed on the vehicle to collect road pictures or videos as the vehicle travels.
  • The vehicle controller can obtain the lane line scene graph from the road pictures or videos collected by the collection device and label the lane lines in it to obtain the lane line semantic label map; it can further train a lane line detection model based on the lane line scene graph and the lane line semantic label map; and the trained lane line detection model can then be used for lane line detection.
  • a method for marking lane lines is provided, which is described by taking the method applied to a vehicle controller as an example, including the following steps S102 to S108 .
  • S102 Acquire position information of marked points on each lane line based on the lane line scene graph.
  • The lane line scene graph represents a road scene picture containing lane lines, which can be obtained by a camera mounted on the vehicle photographing the road ahead while the vehicle is running.
  • FIG. 2 shows a schematic diagram of a lane line scene graph in an embodiment.
  • The lane line scene graph includes four lane lines, which from left to right are a solid line, a dashed line, a dashed line, and a solid line. As shown in FIG. 2, each lane line appears thicker at the near end and thinner at the far end, that is, the lane line is thicker in the part close to the camera and thinner in the part far from the camera.
  • the label point represents the point selected on the lane line in the lane line scene graph. For each lane line, at least two label points are selected.
  • The location information of a marked point can specifically be its coordinate value in a coordinate system established based on the lane line scene graph. In one embodiment, the lower left corner of the lane line scene graph shown in FIG. 2 is taken as the coordinate origin, the vertical direction as the first coordinate axis (Y axis, with vertically upward as the positive direction), and the horizontal direction as the second coordinate axis (X axis).
  • the point to be drawn represents the point required to draw the lane line in the lane line scene graph, which can be understood as the point on the marked line to be drawn corresponding to the lane line.
  • the marked points on a lane line can be directly used as the points to be drawn for that lane line, or linear interpolation can be performed based on adjacent marked points on the lane line and the marked points together with the interpolation points used as the points to be drawn for the lane line.
  • the position information of the point to be drawn can be represented by (y i , x i ), and i represents the ith point to be drawn.
  • the thickness of the labeled line at a point to be drawn can be determined according to the magnitude of the Y-axis coordinate value (i.e., y_i) of that point, so that the line thicknesses at points to be drawn with different Y-axis coordinate values are different.
  • the thickness of each to-be-drawn point is inversely related to the Y-axis coordinate value, that is, the smaller the Y-axis coordinate value of the to-be-drawn point, the thicker the marked line at the corresponding position.
  • the lane line semantic label map represents a picture obtained by labeling the lane lines in the lane line scene graph.
  • FIG. 3 shows a schematic diagram of the lane line semantic label map in one embodiment, which corresponds to the lane line scene graph shown in FIG. 2. The lane line semantic label map includes four labeled lines, corresponding respectively to the four lane lines in the lane line scene graph of FIG. 2. Each labeled line is thicker at the near end and thinner at the far end, consistent with the characteristic of each lane line shown in FIG. 2.
  • In the above method, the position information of the points to be drawn corresponding to a lane line is obtained from the position information of the marked points on that lane line in the lane line scene graph, and is then used to determine the thickness information of the labeled line at each point to be drawn. The drawn lane line label is thick at the near end and thin at the far end, which reduces the adhesion of the labeled lane lines at the far end of the lane line semantic label map and improves the accuracy of subsequent lane line detection.
  • In one embodiment, the step of determining the position information of the points to be drawn corresponding to each lane line according to the position information of the marked points on each lane line may specifically include the following steps: performing linear interpolation on the position information of adjacent marked points to obtain the position information of the interpolation points between the adjacent marked points; and determining the position information of the points to be drawn corresponding to each lane line based on the marked points on each lane line and the position information of the interpolation points.
  • the marked points on the lane line can be marked manually.
  • A certain spacing can be left between adjacent marked points, and interpolation points are then inserted between all pairs of adjacent marked points through linear interpolation. All marked points and all interpolation points are used as points to be drawn.
  • Any two adjacent marked points determine a straight line: a first-order linear equation can be fitted based on the position information of the two adjacent marked points, and the position information of the interpolation points between them can be calculated from this equation. For example, suppose the position information of two adjacent marked points (P1, P2) is (y1, x1) and (y2, x2) respectively; a linear equation x = k·y + b is fitted from P1 and P2, an intermediate value between y1 and y2 is taken as the Y-axis coordinate value of an interpolation point, and this value is substituted into the linear equation to compute the X-axis coordinate value of the interpolation point, giving its position information. One or more intermediate values may be selected between y1 and y2, so the number of interpolation points between P1 and P2 may be one or more.
  • In this embodiment, linear interpolation is performed on the position information of adjacent marked points on a lane line to obtain the position information of the interpolation points between them, and the position information of the points to be drawn for that lane line is then determined from all the marked points and all the interpolation points (see the sketch below). This reduces the workload of manual labeling, quickly yields the points needed to draw the lane line label, improves labeling efficiency, reduces manual labeling errors, and improves labeling accuracy.
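  • The interpolation described above can be sketched in Python as follows. This is an illustrative sketch only: the function name, the (y, x) array layout, the assumption that marked points are sorted by y, and the interpolation step size are assumptions rather than details given in the application.

```python
import numpy as np

def interpolate_lane_points(marked_points, step=1.0):
    """marked_points: list of (y, x) marked points of one lane line, sorted by y.
    Inserts interpolation points between each pair of adjacent marked points by
    fitting x = k*y + b through the pair and sampling intermediate y values."""
    points_to_draw = []
    for (y1, x1), (y2, x2) in zip(marked_points[:-1], marked_points[1:]):
        points_to_draw.append((y1, x1))
        if y2 == y1:
            continue  # identical heights: nothing to interpolate between this pair
        k = (x2 - x1) / (y2 - y1)        # slope of the first-order linear equation
        b = x1 - k * y1                  # intercept
        for y in np.arange(min(y1, y2) + step, max(y1, y2), step):
            points_to_draw.append((y, k * y + b))
    points_to_draw.append(marked_points[-1])
    return points_to_draw
```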
  • the location information includes a first coordinate value of a first coordinate axis direction in a coordinate system established based on a lane line scene graph, and the first coordinate axis direction represents a direction corresponding to the extending direction of the lane line;
  • The step of determining the thickness information of the labeled line corresponding to each lane line at each point to be drawn can specifically be: based on the magnitude of the first coordinate value of each point to be drawn corresponding to each lane line, determining the thickness of the labeled line at each point to be drawn, so that the thickness of the labeled line decreases along the extension direction of the corresponding lane line.
  • the first coordinate axis direction is the Y axis direction
  • the first coordinate value is the Y axis coordinate value
  • the lane line extension direction specifically represents the lane line extending direction from the near end to the far end
  • The Y-axis direction corresponds to the extension direction of the lane line; it can be understood that as the lane line extends from the near end to the far end, the corresponding Y-axis coordinate value gradually increases.
  • For example, let the first coordinate value of the i-th point to be drawn on a lane line be denoted y_i and the thickness of the labeled line at that point be denoted δ_i. The thickness can be computed as a function of y_i (e.g. a linear function δ_i = a·y_i + b) in which the coefficient of y_i is a negative value, so that δ_i decreases as y_i increases and the thickness of the labeled line decreases along the extension direction of the corresponding lane line.
  • In this embodiment, the magnitude of the first coordinate value of each point to be drawn is used to determine the thickness of the labeled line at that point, so that the thickness of the labeled line decreases along the extension direction of the corresponding lane line, thereby reducing the adhesion of the labeled lane lines at the far end and improving the accuracy of subsequent lane line detection. A sketch of this thickness rule follows.
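  • A minimal sketch of the thickness rule, assuming a simple linear mapping from the Y-axis coordinate to line thickness; the coefficient values and the clamping to a minimum width are illustrative assumptions, not values from the application.

```python
def line_thickness(y_i, a=-0.02, b=20.0, min_thickness=1):
    """Thickness of the labeled line at a point to be drawn with Y coordinate y_i.
    The coefficient a is negative, so thickness decreases as y_i increases,
    i.e. the labeled line gets thinner toward the far end of the lane line."""
    return max(int(round(a * y_i + b)), min_thickness)
```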
  • In one embodiment, the following steps may further be included: obtaining the category information of the marked points on each lane line based on the lane line scene graph, and obtaining the category information of the points to be drawn corresponding to each lane line according to the category information of the marked points on that lane line. In this case, drawing the labeled line corresponding to each lane line may specifically be: drawing the labeled line according to the position information and category information of the points to be drawn corresponding to each lane line and the thickness information of the labeled line at each point to be drawn, obtaining the lane line semantic label map corresponding to the lane line scene graph.
  • the category information is used to indicate the lane line category to which the marked point belongs.
  • the category information can be color information, that is, different colors are used to indicate different lane line categories.
  • the lane line categories include solid lines and dashed lines.
  • the first color indicates the solid line category
  • The second color indicates the dashed line category, so that when drawing the labeled line corresponding to each lane line, in addition to controlling the thickness of the drawn line according to the position information of the points to be drawn, the color of the drawn line can also be controlled according to the lane line category.
  • the colors of the four marked lines are red, green, green, and red from left to right.
  • In this embodiment, the lane line category is indicated by the category information of the marked points, so that in subsequent lane line detection not only the lane line position but also the lane line category can be detected, making the detection result more comprehensive. A drawing sketch combining thickness and category color follows.
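  • Combining position, thickness and category, the labeled line of one lane line could be drawn roughly as below with OpenCV. The color coding (red for solid, green for dashed) follows the example above; the BGR tuples, the per-segment thickness formula and the coordinate flip are illustrative assumptions.

```python
import cv2
import numpy as np

CATEGORY_COLORS = {"solid": (0, 0, 255), "dashed": (0, 255, 0)}  # BGR: red / green

def draw_label_line(label_map, points_to_draw, category):
    """Draw one lane line's label onto the semantic label map, segment by segment,
    with a per-point thickness so the line is thick near and thin far.
    Points use the patent's convention (y up from the lower-left corner), so they
    are flipped to OpenCV's top-left-origin row index before drawing."""
    h = label_map.shape[0]
    color = CATEGORY_COLORS[category]
    pts = sorted(points_to_draw)               # ascending y: near end first
    for (y1, x1), (y2, x2) in zip(pts[:-1], pts[1:]):
        t = max(int(20 - 0.02 * y1), 1)        # negative slope: thinner as y grows
        p1 = (int(x1), int(h - 1 - y1))
        p2 = (int(x2), int(h - 1 - y2))
        cv2.line(label_map, p1, p2, color, t)
    return label_map
```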
  • a method for determining a lane line detection model is provided, which is described by taking the method applied to a vehicle controller as an example, including the following steps S402 to S408 .
  • the sample lane line scene graph represents a road scene graph including lane lines, which can be specifically obtained by photographing the road ahead by a camera installed on the vehicle while the vehicle is driving.
  • the sample lane line scene graph is used as the training set for training the generative adversarial network model.
  • the lane line labeling method in any of the foregoing embodiments may be used to obtain a lane line semantic label map corresponding to the sample lane line scene graph.
  • the generative adversarial network model includes a generator and a discriminator.
  • the generator is used to generate a lane line semantic map from the input sample lane line scene graph, and the discriminator aims to distinguish the lane line semantic label map from the generated lane line semantic map.
  • the training goal of the generative adversarial network model is to minimize the difference between the lane line semantic map and the lane line semantic label map.
  • the generator and the discriminator are trained against each other based on the loss function, and finally the optimal parameters of the network model are obtained.
  • S408 Determine a lane line detection model according to the generator in the trained generative adversarial network model.
  • The generator in the trained generative adversarial network model can be used as the lane line detection model: the to-be-detected lane line scene graph is input into the trained generator, and the corresponding lane line semantic map is generated. A training sketch follows.
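  • The adversarial training loop can be sketched as below. The application does not specify network architectures, losses or optimizers; this sketch assumes a pix2pix-style conditional GAN in PyTorch with an L1 term, and the function and parameter names are assumptions.

```python
import torch
import torch.nn as nn

def train_step(generator, discriminator, g_opt, d_opt, scene, label_map, l1_weight=100.0):
    """One adversarial training step: the generator maps the scene image to a lane
    line semantic map; the discriminator judges (scene, semantic map) pairs."""
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

    # Discriminator step: real (scene, label) pairs vs generated pairs.
    d_opt.zero_grad()
    fake = generator(scene).detach()
    d_real = discriminator(torch.cat([scene, label_map], dim=1))
    d_fake = discriminator(torch.cat([scene, fake], dim=1))
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    d_loss.backward()
    d_opt.step()

    # Generator step: fool the discriminator and stay close to the label map.
    g_opt.zero_grad()
    fake = generator(scene)
    d_fake = discriminator(torch.cat([scene, fake], dim=1))
    g_loss = bce(d_fake, torch.ones_like(d_fake)) + l1_weight * l1(fake, label_map)
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()

# After training, only the generator is kept as the lane line detection model:
# semantic_map = generator(scene_to_detect)
```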
  • In this way, the real lane line scene graph is used as the input, and semantic segmentation is performed to generate a lane line semantic map, which helps remove complex backgrounds and can generate lane lines in occluded areas, giving better robustness and adaptability.
  • a lane line detection method is provided, and the method is applied to a vehicle controller as an example for description, including the following steps S502 to S506 .
  • the to-be-detected lane line scene graph represents a road scene graph including the to-be-detected lane line, and can be specifically obtained by photographing the road ahead by a camera installed on the vehicle while the vehicle is driving.
  • S504 perform lane line detection on the scene graph of the lane line to be detected, and obtain a lane line semantic map, where the lane line semantic map includes position information of each pixel.
  • the lane line detection model can be used to perform lane line detection on the scene graph of the lane line to be detected, and obtain the lane line semantic map.
  • the lane line detection model may be a generator in the trained generative adversarial network model, and the method for determining the lane line detection model may refer to the above embodiments, which will not be repeated here.
  • the pixel points represent the points contained in the detected lane lines, and the location information of the pixel points may specifically be the coordinate values of the pixel points in the coordinate system established based on the semantic map of the lane lines.
  • In one embodiment, the lower left corner of the lane line semantic map is taken as the coordinate origin, the vertical direction as the first coordinate axis (represented by the Y axis, with vertically upward as the positive direction), and the horizontal direction as the second coordinate axis (represented by the X axis). The position information of a pixel is represented by (y, x), where y is the Y-axis coordinate value of the pixel and x is its X-axis coordinate value.
  • In this embodiment, the to-be-detected lane line scene graph is input into the generator of the generative adversarial network model to generate the corresponding lane line semantic map. This enables end-to-end detection and removes the need for steps such as preprocessing and post-computation on the scene graph; the detection distance is longer, less manual parameter tuning is required, and robustness is better. Compared with probability-map-based semantic segmentation neural networks, which can only detect a fixed number of lane lines and cannot generate lane lines in occluded areas, using a generative adversarial network for lane line detection can detect all lane lines in the scene graph at the same time and can generate lane lines in occluded areas, improving detection accuracy and adapting to most complex road scenes.
  • In one embodiment, the step of determining the lane lines in the to-be-detected lane line scene graph based on the position information of each pixel in the lane line semantic map may specifically include the following steps: obtaining the lane line contour of each connected area based on the position information of each pixel in the lane line semantic map; for each lane line contour, judging whether the contour is a glued lane line contour according to the position information of its contour points; when a lane line contour is a glued lane line contour, segmenting it according to the position information of its contour points to obtain segmented lane line contours; determining the target lane line contours according to the non-glued lane line contours and the segmented lane line contours; and determining the lane lines in the to-be-detected lane line scene graph based on the contour points of each target lane line contour.
  • In one embodiment, the position information of the contour points is specifically the first-coordinate extreme-value position information of the contour points: based on the first-coordinate extreme-value position information, it is judged whether a lane line contour is a glued lane line contour, and a glued lane line contour is segmented accordingly.
  • the target lane line profile represents the profile that is ultimately used for curve fitting to determine the lane line.
  • In addition, the same lane line may be broken into segments, that is, there may be multiple lane line contours corresponding to the same lane line, so the corresponding lane line contours also need to be merged; the merging of lane line contours is described in detail in the following embodiments.
  • In one embodiment, the lane line semantic map can first be preprocessed, which includes the following steps: performing a closing operation on the lane line semantic map to fill holes, which facilitates subsequently finding complete closed lane line contours; performing an opening operation on the closed map to reduce the adhesion between two different lane lines that the closing operation may introduce; and binarizing the opened map to filter out some noise pixels.
  • After preprocessing, the closed contour of each connected area is obtained and regarded as the initial contour of a lane line, the perimeter of each closed contour is calculated, and closed contours whose perimeter is less than a perimeter threshold are eliminated to filter out noise, obtaining the lane line contours used for subsequent segmentation and merging. A preprocessing sketch follows.
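  • The close/open/binarize/contour-filter pipeline can be sketched with OpenCV as below; the kernel size, binarization threshold and perimeter threshold are illustrative assumptions, not values from the application.

```python
import cv2
import numpy as np

def preprocess_semantic_map(semantic_map, kernel_size=5, perimeter_thresh=100.0):
    """Close -> open -> binarize the lane line semantic map, then keep only the
    connected-area contours whose perimeter exceeds a threshold."""
    gray = cv2.cvtColor(semantic_map, cv2.COLOR_BGR2GRAY)
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    closed = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)    # fill holes
    opened = cv2.morphologyEx(closed, cv2.MORPH_OPEN, kernel)   # reduce adhesion from closing
    _, binary = cv2.threshold(opened, 10, 255, cv2.THRESH_BINARY)  # drop noise pixels
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    return [c for c in contours if cv2.arcLength(c, True) >= perimeter_thresh]
```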
  • the position information of the contour point includes a first coordinate value of a first coordinate axis direction in a coordinate system established based on the lane line semantic map, and the first coordinate axis direction represents a direction corresponding to the extending direction of the lane line;
  • The step of judging whether a lane line contour is a glued lane line contour according to the position information of its contour points may specifically include the following steps: taking any contour point of the lane line contour as the starting point, obtaining the first coordinate value of each contour point in turn along the contour direction; obtaining the first-coordinate maxima and first-coordinate minima from the first coordinate values obtained in turn; and judging whether the lane line contour is a glued lane line contour according to the number of first-coordinate maxima and the number of first-coordinate minima.
  • When at least one of the number of first-coordinate maxima and the number of first-coordinate minima is greater than 1, the lane line contour is determined to be a glued lane line contour; a glued lane line contour can be understood as including contours corresponding to at least two different lane lines. When the number of first-coordinate maxima and the number of first-coordinate minima are both 1, the lane line contour is determined to be a non-glued lane line contour, which can be understood as a contour corresponding to a single lane line.
  • the direction of the first coordinate axis is the direction of the Y axis
  • the first coordinate value is the coordinate value of the Y axis.
  • In one embodiment, the specific method for finding extreme points can be as follows: taking any contour point of the lane line contour as the starting point, the contour points are stored in turn in the counterclockwise order of the contour to obtain a contour point set; it can be understood that the last contour point stored in the set is the right neighbor of the first stored contour point. If the Y-axis coordinate values of the N points adjacent to a contour point on its left and right all exceed the Y-axis coordinate value of that contour point by more than a threshold, the contour point is a minimum point; if the Y-axis coordinate value of a contour point exceeds those of its N left and right neighbors by more than the threshold, the contour point is a maximum point.
  • N is a positive integer that can be set according to actual needs, for example 2 or 3; the threshold is a positive number that can also be set according to actual needs and is not limited here. A sketch of this extreme-point search follows.
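  • A minimal sketch of the extreme-point search described above; the circular indexing of the contour and the default values of n and thresh are illustrative assumptions.

```python
def find_y_extrema(contour_points, n=2, thresh=1.0):
    """contour_points: list of (y, x) stored in counter-clockwise contour order.
    A point is a minimum if its n left and right neighbours are all higher in y
    by more than thresh, and a maximum if they are all lower by more than thresh.
    The contour is treated as circular, so indices wrap around."""
    num = len(contour_points)
    minima, maxima = [], []
    for i in range(num):
        y_i = contour_points[i][0]
        neighbours = [contour_points[(i + k) % num][0]
                      for k in range(-n, n + 1) if k != 0]
        if all(y - y_i > thresh for y in neighbours):
            minima.append(i)
        elif all(y_i - y > thresh for y in neighbours):
            maxima.append(i)
    return minima, maxima

# A contour is treated as a glued lane line contour when
# len(maxima) > 1 or len(minima) > 1.
```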
  • FIG. 6 and FIG. 7 respectively show schematic diagrams of lane line contours in one embodiment. In FIG. 6 there are two Y-axis coordinate maxima (corresponding to contour points YE_max1 and YE_max2) and two Y-axis coordinate minima (corresponding to contour points YE_min1 and YE_min2), that is, the lane line contour shown in FIG. 6 is a glued lane line contour that includes contours corresponding to two different lane lines.
  • In one embodiment, when the lane line contour is a glued lane line contour, the step of segmenting the glued lane line contour according to the position information of its contour points to obtain the segmented lane line contours may specifically include the following steps: taking the first-coordinate maximum point corresponding to any first-coordinate maximum value as the starting point, the first-coordinate maximum points are sorted according to the contour direction of the glued lane line contour; the segmented lane line contours are then obtained from the contour points between first-coordinate maximum points with adjacent serial numbers.
  • Specifically, the contour points are stored in sequence to obtain a contour point set; the first-coordinate maximum point closest to the first stored contour point is taken as the starting point, and the first-coordinate maximum points are sorted in the counterclockwise order of the lane line contour. For example, the serial numbers of YE_max1, YE_max2 and YE_max3 would be 1, 2 and 3 in turn.
  • the outline points of the first segment of the segmented lane line outline include the outline points between YE_max1 and YE_max2
  • the contour points of the second segment of the segmented lane line contour include the contour points between YE_max2 and YE_max3
  • the contour points of the third segment of the segmented lane line contour include the contour points between YE_max3 and YE_max1.
  • In this embodiment, the glued lane line contour is segmented based on the first-coordinate maximum points, and curve fitting can then be performed on the contour points of each segmented lane line contour, avoiding the influence of lane line adhesion on the fitting and improving the accuracy of the fitted lane lines. A segmentation sketch follows.
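  • A sketch of splitting a glued contour at its Y-maximum points, assuming the contour points are stored in counter-clockwise order and at least two maxima have been found; the function name and the wrap-around handling are assumptions.

```python
def split_glued_contour(contour_points, maxima_indices):
    """Split a glued lane line contour at its Y-maximum points.
    contour_points are stored in counter-clockwise order; maxima_indices are the
    indices of the first-coordinate maximum points. Each segment consists of the
    contour points between two maxima with adjacent serial numbers, and the last
    segment wraps around to the first maximum."""
    maxima = sorted(maxima_indices)
    segments = []
    for j in range(len(maxima)):
        start = maxima[j]
        end = maxima[(j + 1) % len(maxima)]
        if start <= end:
            segment = contour_points[start:end + 1]
        else:  # wrap past the end of the stored contour
            segment = contour_points[start:] + contour_points[:end + 1]
        segments.append(segment)
    return segments
```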
  • When a detected lane line is broken into disconnected segments of the same lane line, this also affects the accuracy of subsequent curve fitting and may cause multiple disconnected segments belonging to the same lane line to be determined as multiple lane lines, so the disconnected lane line contours need to be merged.
  • In one embodiment, the step of determining the target lane line contours according to the non-glued lane line contours and the segmented lane line contours may specifically include the following steps: for any two lane line contours among the divided contours, judging whether the two contours correspond to the same lane line according to the position information of their contour points, where the divided contours include the non-glued lane line contours and the segmented lane line contours; when two lane line contours correspond to the same lane line, merging their contour points to obtain a merged lane line contour; and determining the target lane line contours from the divided contours that correspond to different lane lines together with the merged lane line contours.
  • In one embodiment, the step of judging whether two lane line contours correspond to the same lane line according to the position information of their contour points may specifically include the following steps: obtaining the position information of the first lowest point and the first highest point of the first lane line contour, and the position information of the second lowest point and the second highest point of the second lane line contour, where the lowest and highest points are determined based on the first coordinate value of each contour point and the first coordinate value of the first highest point is greater than or equal to the first coordinate value of the second highest point; performing straight-line fitting on the contour points of the first and second lane line contours respectively to obtain a first fitting line and a second fitting line, and calculating the first slope and the second slope of these two fitting lines; and judging whether the two contours correspond to the same lane line based on the slope difference between the first slope and the second slope, the first distance between the first lane line contour and the second lane line contour, and the second distance between the first lowest point and the second highest point.
  • The first lowest point refers to the lowest point of the first lane line contour, that is, the contour point with the minimum Y-axis coordinate in the first lane line contour; the first highest point refers to the highest point of the first lane line contour, that is, the contour point with the maximum Y-axis coordinate in the first lane line contour. Similarly, the second lowest point and the second highest point refer respectively to the contour points with the minimum and maximum Y-axis coordinates in the second lane line contour.
  • the Y-axis coordinate value of the first highest point is greater than or equal to the Y-axis coordinate value of the second highest point.
  • the first distance refers to the distance between the contour of the first lane line and the contour of the second lane line
  • the second distance refers to the distance between the lowest point of the contour of the first lane line and the highest point of the contour of the second lane line.
  • FIG. 8 shows a schematic diagram of a lane line outline in an embodiment, the first lane line outline eline is located above the second lane line outline line, and the ps1 point and the pe1 point respectively represent the highest point of the first lane line outline eline and the lowest point, the ps2 point and the pe2 point represent the highest and lowest points of the second lane line contour line, respectively.
  • The distance between the first lane line contour eline and the second lane line contour line can be represented by the distance from an extreme point of one contour to the fitting line of the other contour; specifically, it can be the distance from the lowest point pe1 of the first lane line contour eline to the fitting line of the second lane line contour line, the distance from the highest point ps1 of eline to the fitting line of line, the distance from the lowest point pe2 of line to the fitting line of eline, or the distance from the highest point ps2 of line to the fitting line of eline.
  • When the slope difference is less than a first threshold, the first distance is less than a second threshold, and the second distance is less than a third threshold, it is determined that the two lane line contours correspond to the same lane line. That is, when the slopes of the two contours are close and both distances are small, the two contours are considered to belong to the same lane line, and their contour points can be merged to obtain a merged lane line contour.
  • The first threshold, the second threshold and the third threshold may all be set according to actual conditions. A sketch of this merging check follows.
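  • The merging decision can be sketched as below. The threshold values, the choice of pe1-to-line distance for the first distance, and the assumption that contour_a is the upper contour are illustrative; the application allows several equivalent distance definitions.

```python
import numpy as np

def should_merge(contour_a, contour_b, slope_thresh=0.2, dist_thresh=20.0, gap_thresh=40.0):
    """Decide whether two lane line contours belong to the same lane line.
    Each contour is an array of (y, x) points; contour_a is assumed to be the one
    whose highest point has the larger (or equal) y value."""
    ya, xa = contour_a[:, 0], contour_a[:, 1]
    yb, xb = contour_b[:, 0], contour_b[:, 1]

    # Straight-line fits x = k*y + b for each contour.
    ka, ba = np.polyfit(ya, xa, 1)
    kb, bb = np.polyfit(yb, xb, 1)
    slope_diff = abs(ka - kb)

    lowest_a = contour_a[np.argmin(ya)]    # first lowest point (pe1)
    highest_b = contour_b[np.argmax(yb)]   # second highest point (ps2)

    # First distance: from the lowest point of contour_a to contour_b's fitted line.
    first_distance = abs(kb * lowest_a[0] + bb - lowest_a[1]) / np.hypot(kb, 1.0)
    # Second distance: between the first lowest point and the second highest point.
    second_distance = np.hypot(lowest_a[0] - highest_b[0], lowest_a[1] - highest_b[1])

    return (slope_diff < slope_thresh and
            first_distance < dist_thresh and
            second_distance < gap_thresh)
```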
  • In addition to using the slope difference, the first distance and the second distance as parameters for judging whether two lane line contours correspond to the same lane line, other parameters may also be used. For example, as shown in FIG. 8, the parameters may further include: the absolute difference (denoted Ly) of the Y-axis coordinate values of the lowest point pe1 of the first lane line contour eline and the lowest point pe2 of the second lane line contour line; the absolute difference (denoted Lx) of their X-axis coordinate values; the absolute difference (denoted Hy) of the Y-axis coordinate values of the highest point ps1 of eline and the highest point ps2 of line; and the absolute difference (denoted Hx) of their X-axis coordinate values.
  • the contours of the disconnected lanes can be effectively merged, thereby improving the accuracy of subsequent curve fitting.
  • After segmentation and merging, the final target lane line contours are obtained; cubic curve fitting can be performed on the contour points of each target lane line contour to obtain the curve-fitting parameters, and the fitted lane lines are displayed as the final lane line detection result. A fitting sketch follows.
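  • A minimal sketch of the cubic fit, assuming x is modelled as a cubic polynomial of y (which suits mostly vertical lane lines); the sampling helper is an illustrative addition.

```python
import numpy as np

def fit_lane_curve(contour_points):
    """Cubic curve fit over the contour points of one target lane line contour.
    Points are (y, x); returns the four polynomial coefficients."""
    pts = np.asarray(contour_points, dtype=float)
    y, x = pts[:, 0], pts[:, 1]
    return np.polyfit(y, x, 3)       # [a3, a2, a1, a0] for x = a3*y^3 + ... + a0

def sample_lane_curve(coeffs, y_min, y_max, num=100):
    """Sample the fitted curve for display as the final detection result."""
    y = np.linspace(y_min, y_max, num)
    x = np.polyval(coeffs, y)
    return np.stack([y, x], axis=1)
```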
  • If there is no glued lane line contour among the detected lane line contours, the detected lane line contours are used directly; if there is a glued lane line contour, it is segmented to obtain segmented lane line contours, and the detected non-glued lane line contours together with the segmented lane line contours are taken as the divided contours. If no contours among the divided contours need to be merged, the divided contours are the target lane line contours; if there are contours to be merged, they are merged to obtain merged lane line contours, and the divided contours that do not need merging together with the merged lane line contours are taken as the target lane line contours.
  • In one embodiment, the lane line semantic map also includes category information of each pixel, which indicates the lane line category to which the pixel belongs; after the target lane line contours are obtained, the lane line category corresponding to each target lane line contour can also be determined according to the category information of the pixels within it.
  • Specifically, for each target lane line contour, the number of pixels corresponding to each category information within the contour is counted, and the lane line category indicated by the category information with the largest number of corresponding pixels is determined as the lane line category of that target lane line contour.
  • the category information is color information in the semantic map of lane lines.
  • the lane line categories include solid lines and dashed lines.
  • the solid line category is indicated by red
  • the dashed line category is indicated by green.
  • For each target lane line contour, the number of pixels corresponding to the solid line category and to the dashed line category are counted respectively. If the number of solid-category pixels is greater than the number of dashed-category pixels, the lane line category corresponding to that contour is determined to be a solid line; if the number of dashed-category pixels is greater, the category is determined to be a dashed line. A category-voting sketch follows.
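  • The majority vote over pixel colours can be sketched as below; the exact BGR values, the contour mask representation and the tie-breaking toward "solid" are illustrative assumptions.

```python
import numpy as np

# Example colour coding from the semantic map: red = solid, green = dashed (BGR order).
SOLID_BGR, DASHED_BGR = (0, 0, 255), (0, 255, 0)

def contour_category(semantic_map, contour_mask):
    """Decide the lane line category of one target lane line contour by majority vote
    over the pixels inside the contour. contour_mask is a boolean array of the same
    height/width as semantic_map marking the contour's interior."""
    pixels = semantic_map[contour_mask]               # shape (N, 3)
    solid_count = np.sum(np.all(pixels == SOLID_BGR, axis=1))
    dashed_count = np.sum(np.all(pixels == DASHED_BGR, axis=1))
    return "solid" if solid_count >= dashed_count else "dashed"
```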
  • In this embodiment, the lane line category is determined from the category information of the pixels included in each target lane line contour, so that not only the position of the lane line in the lane line scene graph but also its category can be detected, making the lane line detection result more comprehensive.
  • It should be understood that although the steps in FIGS. 1, 4 and 5 are shown in sequence following the arrows, these steps are not necessarily executed in that sequence. Unless explicitly stated herein, the order of execution of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in FIGS. 1, 4 and 5 may include multiple sub-steps or stages, which are not necessarily executed and completed at the same time but may be executed at different times; the order of their execution is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
  • a lane marking device 900 is provided, including: a marked point information acquisition module 910, a to-be-drawn point information determination module 920, a marking line information determination module 930, and a marking line drawing module 940, of which:
  • the marked point information acquisition module 910 is configured to acquire the position information of marked points on each lane line based on the lane line scene graph.
  • the to-be-drawn point information determination module 920 is configured to determine the location information of the to-be-drawn points corresponding to each lane line according to the location information of the marked points on each lane line.
  • the marking line information determining module 930 is configured to determine the thickness information of the marking line corresponding to each lane line at each point to be drawn based on the position information of the point to be drawn corresponding to each lane line.
  • The labeling line drawing module 940 is used to draw the labeled line corresponding to each lane line according to the position information of the points to be drawn corresponding to each lane line and the thickness information of the labeled line at each point to be drawn, obtaining the lane line semantic label map corresponding to the lane line scene graph.
  • In one embodiment, the to-be-drawn point information determination module 920 is specifically configured to: perform linear interpolation according to the position information of adjacent marked points on each lane line to obtain the position information of the interpolation points between them; and determine the position information of the points to be drawn corresponding to each lane line based on the marked points and the interpolation points.
  • In one embodiment, the location information includes a first coordinate value along a first coordinate axis direction in a coordinate system established based on the lane line scene graph, and the first coordinate axis direction corresponds to the extension direction of the lane line. The labeling line information determination module 930 is specifically configured to: determine the thickness of the labeled line corresponding to each lane line at each point to be drawn based on the magnitude of the first coordinate value of each point to be drawn, so that the thickness of the labeled line decreases along the extension direction of the corresponding lane line.
  • In one embodiment, the marked point information acquisition module 910 is further configured to obtain the category information of the marked points on each lane line based on the lane line scene graph; the to-be-drawn point information determination module 920 is further configured to obtain the category information of the points to be drawn corresponding to each lane line according to the category information of the marked points; and the labeling line drawing module 940 is specifically configured to draw the labeled line corresponding to each lane line according to the position information and category information of the points to be drawn and the thickness information of the labeled line at each point to be drawn, obtaining the lane line semantic label map corresponding to the lane line scene graph.
  • an apparatus 1000 for determining a lane line detection model including: a sample acquisition module 1010, a lane line marking module 1020, a model training module 1030, and a model determination module 1040, wherein:
  • the sample acquisition module 1010 is configured to acquire a sample lane line scene graph.
  • the lane line labeling module 1020 is configured to use the lane line labeling method in any of the foregoing embodiments to perform lane line labeling on the sample lane line scene graph, and obtain a lane line semantic label map corresponding to the sample lane line scene graph.
  • the model training module 1030 is configured to train the generative adversarial network model to be trained based on the sample lane line scene graph and the lane line semantic label graph, and obtain the trained generative adversarial network model.
  • the model determination module 1040 is configured to determine a lane line detection model according to the generator in the trained generative adversarial network model.
  • a lane line detection apparatus 1100 including: a to-be-detected picture acquisition module 1110, a lane line detection module 1120, and a lane line determination module 1130, wherein:
  • the to-be-detected picture acquisition module 1110 is configured to acquire the to-be-detected lane line scene graph.
  • The lane line detection module 1120 is configured to use the lane line detection model determined by the determination method in any of the foregoing embodiments to perform lane line detection on the to-be-detected lane line scene graph, obtaining a lane line semantic map, where the lane line semantic map includes the position information of each pixel.
  • the lane line determination module 1130 is configured to determine the lane line in the scene map of the lane line to be detected based on the position information of each pixel in the lane line semantic map.
  • the lane line determination module 1130 includes: a contour acquisition unit, a first judgment unit, a segmentation unit, and a determination unit.
  • The contour acquisition unit is used to obtain the lane line contour of each connected area based on the position information of each pixel in the lane line semantic map; the first judgment unit is used, for each lane line contour, to judge whether the contour is a glued lane line contour according to the position information of its contour points; the segmentation unit is used, when a lane line contour is a glued lane line contour, to segment it according to the position information of its contour points to obtain segmented lane line contours; and the determination unit is used to determine the target lane line contours according to the non-glued lane line contours and the segmented lane line contours, and to determine the lane lines in the to-be-detected lane line scene graph based on the contour points of each target lane line contour.
  • In one embodiment, the position information of the contour points includes a first coordinate value along a first coordinate axis direction in a coordinate system established based on the lane line semantic map, and the first coordinate axis direction corresponds to the extension direction of the lane line. The first judgment unit is specifically configured to: take any contour point of the lane line contour as the starting point and obtain the first coordinate value of each contour point in turn along the contour direction; obtain the first-coordinate maxima and first-coordinate minima from the first coordinate values obtained in turn; and judge whether the lane line contour is a glued lane line contour according to the number of first-coordinate maxima and the number of first-coordinate minima.
  • the first determination unit is further configured to determine that the lane outline is a glued lane outline when at least one of the number of first coordinate maxima and the number of first coordinate minima is greater than 1.
  • In one embodiment, the segmentation unit is specifically configured to: take the first-coordinate maximum point corresponding to any first-coordinate maximum value as the starting point and sort the first-coordinate maximum points according to the contour direction of the glued lane line contour; and obtain the segmented lane line contours based on the contour points between first-coordinate maximum points with adjacent serial numbers.
  • In one embodiment, the determination unit is specifically configured to: for any two lane line contours among the divided contours, judge whether they correspond to the same lane line according to the position information of their contour points, where the divided contours include the non-glued lane line contours and the segmented lane line contours; when two lane line contours correspond to the same lane line, merge their contour points to obtain a merged lane line contour; and determine the target lane line contours from the divided contours that correspond to different lane lines together with the merged lane line contours.
  • the second judgment unit is specifically configured to: obtain the position information of a first lowest point and a first highest point of a first lane line contour, and obtain the position information of a second lowest point and a second highest point of a second lane line contour, where the lowest points and highest points are determined based on the first coordinate values of the contour points, and the first coordinate value of the first highest point is greater than or equal to the first coordinate value of the second highest point; fit straight lines to the contour points of the first lane line contour and the second lane line contour respectively to obtain a first fitted line and a second fitted line, and calculate the slopes of the first fitted line and the second fitted line to obtain a first slope and a second slope; and judge whether the two lane line contours correspond to the same lane line according to the slope difference between the first slope and the second slope, a first distance between the first lane line contour and the second lane line contour, and a second distance between the first lowest point and the second highest point.
  • the first distance includes any one of the following: the distance from the first lowest point to the second fitted line; the distance from the first highest point to the second fitted line; the distance from the second lowest point to the first fitted line; or the distance from the second highest point to the first fitted line.
  • the second judgment unit is further configured to: determine that the two lane line contours correspond to the same lane line when the slope difference is less than a first threshold, the first distance is less than a second threshold, and the second distance is less than a third threshold.
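A minimal sketch of this same-lane-line test is shown below, assuming contour points are stored as (y, x) pairs in the coordinate system described above, so that a fitted line has the form x = k·y + b. The three threshold values are placeholders, since the disclosure only requires that the slope difference, the contour-to-contour distance, and the lowest-to-highest point distance all be small; the choice of the first lowest point for the first distance is one of the options listed above.

```python
import numpy as np

def fit_line(points):
    """Least-squares line x = k*y + b through contour points given as (y, x) pairs."""
    ys = np.array([p[0] for p in points], dtype=float)
    xs = np.array([p[1] for p in points], dtype=float)
    k, b = np.polyfit(ys, xs, 1)
    return k, b

def point_to_line_distance(point, k, b):
    y, x = point
    # Distance from (y, x) to the line x = k*y + b, i.e. k*y - x + b = 0.
    return abs(k * y - x + b) / np.hypot(k, 1.0)

def same_lane_line(upper_contour, lower_contour,
                   slope_thr=0.2, dist_thr=15.0, gap_thr=80.0):
    """upper_contour's highest point is assumed to have the larger (or equal) Y value."""
    k1, b1 = fit_line(upper_contour)
    k2, b2 = fit_line(lower_contour)
    first_lowest = min(upper_contour, key=lambda p: p[0])
    second_highest = max(lower_contour, key=lambda p: p[0])
    slope_diff = abs(k1 - k2)
    # First distance: here, the distance from the first lowest point to the second fitted line.
    first_distance = point_to_line_distance(first_lowest, k2, b2)
    # Second distance: gap between the first lowest point and the second highest point.
    second_distance = np.hypot(first_lowest[0] - second_highest[0],
                               first_lowest[1] - second_highest[1])
    return slope_diff < slope_thr and first_distance < dist_thr and second_distance < gap_thr
```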
  • the lane line semantic map further includes category information of each pixel point, and the category information is used to indicate the lane line category to which the pixel point belongs.
  • the lane line determining module 1130 is further configured to: determine the lane line category corresponding to each target lane line contour according to the category information of each pixel point in each target lane line contour.
  • the lane line determination module 1130 is specifically configured to: for each target lane line contour, count the number of pixels corresponding to each piece of category information within the target lane line contour, and determine the lane line category indicated by the category information with the largest number of corresponding pixels as the lane line category corresponding to the target lane line contour.
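A minimal sketch of this majority-vote category assignment follows, assuming the semantic map uses the colour convention described elsewhere in the disclosure (a first colour such as red for solid lines and a second colour such as green for dashed lines) and is available as a BGR image; the channel-difference margin and function names are placeholders.

```python
import cv2
import numpy as np

def contour_lane_category(semantic_map_bgr, contour, channel_margin=30):
    """Assign 'solid' or 'dashed' by a majority vote over the pixels inside the contour."""
    mask = np.zeros(semantic_map_bgr.shape[:2], dtype=np.uint8)
    cv2.drawContours(mask, [contour], -1, color=255, thickness=cv2.FILLED)
    pixels = semantic_map_bgr[mask == 255].astype(int)   # (N, 3) BGR values inside the contour
    g, r = pixels[:, 1], pixels[:, 2]
    solid_count = int(np.sum(r - g > channel_margin))     # red-dominant pixels -> solid line
    dashed_count = int(np.sum(g - r > channel_margin))    # green-dominant pixels -> dashed line
    return "solid" if solid_count >= dashed_count else "dashed"
```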
  • for the specific limitations of the lane line marking device, the device for determining the lane line detection model, and the lane line detection device, reference may be made to the limitations of the lane line marking method, the method for determining the lane line detection model, and the lane line detection method above, which will not be repeated here.
  • each module in the above lane line marking device, device for determining the lane line detection model, and lane line detection device may be implemented in whole or in part by software, hardware, or a combination thereof.
  • the above modules may be embedded in or independent of a processor in a computer device in the form of hardware, or stored in a memory of the computer device in the form of software, so that the processor can invoke and execute the operations corresponding to the above modules.
  • a computer device is provided, and the computer device may be a server, and its internal structure diagram may be as shown in FIG. 12 .
  • the computer device includes a processor, a memory, and a network interface connected by a system bus. The processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the nonvolatile storage medium stores an operating system, a computer program, and a database.
  • the internal memory provides an environment for the execution of the operating system and computer programs in the non-volatile storage medium.
  • the network interface of the computer device is used to communicate with an external terminal through a network connection.
  • when executed by the processor, the computer program implements a lane line marking method, a method for determining a lane line detection model, and a lane line detection method.
  • a computer device is provided in one embodiment; the computer device may be a terminal, and its internal structure diagram may be as shown in FIG. 13.
  • the computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus.
  • the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the nonvolatile storage medium stores an operating system and a computer program.
  • the internal memory provides an environment for the execution of the operating system and computer programs in the non-volatile storage medium.
  • the communication interface of the computer device is used for wired or wireless communication with an external terminal, and the wireless communication can be realized by Wi-Fi, an operator network, NFC (Near Field Communication), or other technologies.
  • when executed by the processor, the computer program implements a lane line marking method, a method for determining a lane line detection model, and a lane line detection method.
  • the display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input device of the computer device may be a touch layer covering the display screen, a key, trackball, or touchpad provided on the housing of the computer device, or an external keyboard, touchpad, or mouse.
  • FIG. 12 or FIG. 13 is only a block diagram of a partial structure related to the solution of the present application, and does not constitute a limitation on the computer device to which the solution of the present application is applied.
  • a computer device may include more or fewer components than those shown in the figures, or combine certain components, or have a different arrangement of components.
  • a computer device including a memory and a processor, a computer program is stored in the memory, and the processor implements the steps in each of the foregoing method embodiments when the processor executes the computer program.
  • a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, implements the steps in each of the foregoing method embodiments.
  • a computer program product or computer program comprising computer instructions stored in a computer readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the steps in each of the foregoing method embodiments.
  • Non-volatile memory may include read-only memory (Read-Only Memory, ROM), magnetic tape, floppy disk, flash memory, or optical memory, and the like.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • the RAM may be in various forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).

Abstract

The present application relates to a lane line marking method, a method for determining a lane line detection model, a lane line detection method, and related devices. Lane line detection is performed on a lane line scene map to be detected by a generative adversarial network model to generate a corresponding lane line semantic map, and the lane lines in the lane line scene map to be detected are determined based on the position information of each pixel in the lane line semantic map; in this way, all lane lines in the lane line scene map can be detected at the same time, and lane lines in occluded regions can be generated. In the lane line semantic label maps used to train the generative adversarial network model, the drawn lane line marking lines are thicker at the near end and thinner at the far end, which reduces adhesion of the marked lane lines at the far end and improves lane line detection accuracy.

Description

车道线标注、检测模型确定、车道线检测方法及相关设备 技术领域
本申请涉及智能驾驶技术领域,特别是涉及一种车道线标注、检测模型确定、车道线检测方法及相关设备。
背景技术
随着智能驾驶技术的发展,车道线检测已成为汽车辅助驾驶和无人驾驶的基础环节,准确地检测和识别车道线是车道偏离预警、车道保持、变道等功能的重要前提。目前的基于深度学习的车道线检测方法,检测出来的车道线容易出现在远端相互粘连的情况,车道线粘连会导致无法准确地进行曲线拟合,从而导致最终获得的车道线检测结果准确性较低。
发明内容
基于此,有必要针对上述技术问题,提供一种能够提高车道线检测准确性的车道线标注、检测模型确定、车道线检测方法及相关设备。
一种车道线标注方法,所述方法包括:
基于车道线场景图获取各车道线上的标注点的位置信息;
根据各所述车道线上的标注点的位置信息,确定各所述车道线对应的待绘制点的位置信息;
基于各所述车道线对应的待绘制点的位置信息,确定各所述车道线对应的标注线条在各所述待绘制点处的粗细信息;
根据各所述车道线对应的待绘制点的位置信息、以及标注线条在各所述待绘制点处的粗细信息,绘制各所述车道线对应的标注线条,获得所述车道线场景图对应的车道线语义标签图。
一种车道线标注装置,所述装置包括:
标注点信息获取模块,用于基于车道线场景图获取各车道线上的标注点的位置信息;
待绘制点信息确定模块,用于根据各所述车道线上的标注点的位置信息, 确定各所述车道线对应的待绘制点的位置信息;
标注线条信息确定模块,用于基于各所述车道线对应的待绘制点的位置信息,确定各所述车道线对应的标注线条在各所述待绘制点处的粗细信息;
标注线条绘制模块,用于根据各所述车道线对应的待绘制点的位置信息、以及标注线条在各所述待绘制点处的粗细信息,绘制各所述车道线对应的标注线条,获得所述车道线场景图对应的车道线语义标签图。
一种计算机设备,包括存储器和处理器,所述存储器存储有计算机程序,所述处理器执行所述计算机程序时实现以下步骤:
基于车道线场景图获取各车道线上的标注点的位置信息;
根据各所述车道线上的标注点的位置信息,确定各所述车道线对应的待绘制点的位置信息;
基于各所述车道线对应的待绘制点的位置信息,确定各所述车道线对应的标注线条在各所述待绘制点处的粗细信息;
根据各所述车道线对应的待绘制点的位置信息、以及标注线条在各所述待绘制点处的粗细信息,绘制各所述车道线对应的标注线条,获得所述车道线场景图对应的车道线语义标签图。
一种计算机可读存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时实现以下步骤:
基于车道线场景图获取各车道线上的标注点的位置信息;
根据各所述车道线上的标注点的位置信息,确定各所述车道线对应的待绘制点的位置信息;
基于各所述车道线对应的待绘制点的位置信息,确定各所述车道线对应的标注线条在各所述待绘制点处的粗细信息;
根据各所述车道线对应的待绘制点的位置信息、以及标注线条在各所述待绘制点处的粗细信息,绘制各所述车道线对应的标注线条,获得所述车道线场景图对应的车道线语义标签图。
上述车道线标注方法、装置、计算机设备和存储介质,基于车道线场景图中车道线上的标注点的位置信息获得车道线对应的待绘制点的位置信息,通过车道线对应的待绘制点的位置信息确定车道线对应的标注线条在各待绘制点处 的粗细信息,使得不同位置的待绘制点处的线条粗细程度不同,根据车道线对应的待绘制点的位置信息、以及标注线条在各待绘制点处的粗细信息绘制的车道线标注线条可实现近端粗远端细,从而减少车道线语义标签图中标注的车道线在远端的粘连情况,提高后续车道线检测准确性。
一种车道线检测模型的确定方法,所述方法包括:
获取样本车道线场景图;
采用上述车道线标注方法对所述样本车道线场景图进行车道线标注,获得所述样本车道线场景图对应的车道线语义标签图;
基于所述样本车道线场景图以及所述车道线语义标签图,对待训练生成对抗网络模型进行训练,获得训练后的生成对抗网络模型;
根据所述训练后的生成对抗网络模型中的生成器,确定车道线检测模型。
一种车道线检测模型的确定装置,所述装置包括:
样本获取模块,用于获取样本车道线场景图;
车道线标注模块,用于采用上述车道线标注方法对所述样本车道线场景图进行车道线标注,获得所述样本车道线场景图对应的车道线语义标签图;
模型训练模块,用于基于所述样本车道线场景图以及所述车道线语义标签图,对待训练生成对抗网络模型进行训练,获得训练后的生成对抗网络模型;
模型确定模块,用于根据所述训练后的生成对抗网络模型中的生成器,确定车道线检测模型。
一种计算机设备,包括存储器和处理器,所述存储器存储有计算机程序,所述处理器执行所述计算机程序时实现以下步骤:
获取样本车道线场景图;
采用上述车道线标注方法对所述样本车道线场景图进行车道线标注,获得所述样本车道线场景图对应的车道线语义标签图;
基于所述样本车道线场景图以及所述车道线语义标签图,对待训练生成对抗网络模型进行训练,获得训练后的生成对抗网络模型;
根据所述训练后的生成对抗网络模型中的生成器,确定车道线检测模型。
一种计算机可读存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时实现以下步骤:
获取样本车道线场景图;
采用上述车道线标注方法对所述样本车道线场景图进行车道线标注,获得所述样本车道线场景图对应的车道线语义标签图;
基于所述样本车道线场景图以及所述车道线语义标签图,对待训练生成对抗网络模型进行训练,获得训练后的生成对抗网络模型;
根据所述训练后的生成对抗网络模型中的生成器,确定车道线检测模型。
上述车道线检测模型的确定方法、装置、计算机设备和存储介质,与一般的生成对抗网络将语义图作为输入来生成自然场景图不同,通过逆向使用生成对抗网络,将真实的车道线场景图作为输入进行语义分割,生成车道线语义图,有利于去除复杂背景,且可以生成遮挡区域的车道线,鲁棒性更好,适应性更强。
一种车道线检测方法,所述方法包括:
获取待检测车道线场景图;
采用上述车道线检测模型的确定方法确定的车道线检测模型,对所述待检测车道线场景图进行车道线检测,获得车道线语义图,所述车道线语义图中包括各像素点的位置信息;
基于所述车道线语义图中各像素点的位置信息,确定所述待检测车道线场景图中的车道线。
一种车道线检测装置,所述装置包括:
待检测图片获取模块,用于获取待检测车道线场景图;
车道线检测模块,用于采用上述车道线检测模型的确定方法确定的车道线检测模型,对所述待检测车道线场景图进行车道线检测,获得车道线语义图,所述车道线语义图包括各像素点的位置信息;
车道线确定模块,用于基于所述车道线语义图中各像素点的位置信息,确定所述待检测车道线场景图中的车道线。
一种计算机设备,包括存储器和处理器,所述存储器存储有计算机程序,所述处理器执行所述计算机程序时实现以下步骤:
获取待检测车道线场景图;
采用上述车道线检测模型的确定方法确定的车道线检测模型,对所述待检 测车道线场景图进行车道线检测,获得车道线语义图,所述车道线语义图中包括各像素点的位置信息;
基于所述车道线语义图中各像素点的位置信息,确定所述待检测车道线场景图中的车道线。
一种计算机可读存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时实现以下步骤:
获取待检测车道线场景图;
采用上述车道线检测模型的确定方法确定的车道线检测模型,对所述待检测车道线场景图进行车道线检测,获得车道线语义图,所述车道线语义图中包括各像素点的位置信息;
基于所述车道线语义图中各像素点的位置信息,确定所述待检测车道线场景图中的车道线。
上述车道线检测方法、装置、计算机设备和存储介质,通过生成对抗网络模型中的生成器,生成待检测车道线场景图对应的车道线语义图,可实现端到端检测,省去对车道线场景图进行预处理和计算等步骤,检测距离更远、手动调参量更少且鲁棒性更好,相对于基于概率图的语义分割神经网络的车道线检测方法只能检测固定数量车道线且无法生成遮挡区域的车道线,使用生成对抗网络进行车道线检测,可以同时检测出车道线场景图中的所有车道线,并可以生成遮挡区域的车道线,从而提高车道线检测准确性,能够适应大多数复杂的道路场景。
附图说明
图1为一个实施例中车道线标注方法的流程示意图;
图2为一个实施例中车道线场景图的示意图;
图3为一个实施例中车道线语义标签图的示意图;
图4为一个实施例中车道线检测模型的确定方法的流程示意图;
图5为一个实施例中车道线检测方法的流程示意图;
图6为一个实施例中车道线轮廓的示意图;
图7为一个实施例中车道线轮廓的示意图;
图8为一个实施例中车道线轮廓的示意图;
图9为一个实施例中车道线标注装置的结构框图;
图10为一个实施例中车道线检测模型的确定装置的结构框图;
图11为一个实施例中车道线检测装置的结构框图;
图12为一个实施例中计算机设备的内部结构图;
图13为一个实施例中计算机设备的内部结构图。
具体实施方式
为了使本申请的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
本申请提供的车道线标注方法、车道线检测模型的确定方法以及车道线检测方法,可以应用于车辆智能驾驶系统中,车辆智能驾驶系统包括车辆控制器和采集设备。采集设备可以安装于车辆上,随着车辆的行驶采集道路图片或视频。车辆控制器可以从采集设备采集的道路图片或视频中获取车道线场景图,并对车道线场景图进行车道线标注,获得车道线语义标签图;还可以进一步基于车道线场景图和车道线语义标签图,训练获得车道线检测模型;还可以进一步采用训练后的车道线检测模型进行车道线检测。
在一个实施例中,如图1所示,提供了一种车道线标注方法,以该方法应用于车辆控制器为例进行说明,包括以下步骤S102至步骤S108。
S102,基于车道线场景图获取各车道线上的标注点的位置信息。
车道线场景图表示包含车道线的道路场景图,具体可以通过安装于车辆上的摄像头在车辆行驶的过程中针对前方道路拍摄获得。图2示出了一个实施例中车道线场景图的示意图,该车道线场景图中包括四条车道线,从左往右依次为实线、虚线、虚线、实线,各车道线在图2中呈现近端较粗远端较细的特点,即车道线在靠近摄像头的部分较粗,在远离摄像头的部分较细。
标注点表示在车道线场景图中的车道线上选取的点,对于每一条车道线至少选取两个标注点,标注点的位置信息具体可以是标注点在基于车道线场景图建立的坐标系下的坐标值。在一个实施例中,以图2所示的车道线场景图的左 下角点为坐标原点O,以竖直方向为第一坐标轴(用Y轴表示)方向(竖直向上为正向),以水平方向为第二坐标轴(用X轴表示)方向(水平向右为正向),建立坐标系,标注点的位置信息用(y,x)表示,其中,y表示标注点的Y轴坐标值,x表示标注点的X轴坐标值。
S104,根据各车道线上的标注点的位置信息,确定各车道线对应的待绘制点的位置信息。
待绘制点表示绘制车道线场景图中的车道线所需的点,可以理解为车道线对应的待绘制的标注线条上的点,待绘制点的位置信息具体可以是待绘制点在基于车道线场景图建立的坐标系下的坐标值。需要说明的是,可以直接将车道线上的标注点作为车道线对应的待绘制点,也可以基于车道线上的相邻标注点进行线性插值,将标注点和插值点一起作为车道线对应的待绘制点。
S106,基于各车道线对应的待绘制点的位置信息,确定各车道线对应的标注线条在各待绘制点处的粗细信息。
参考前述实施例中的坐标系,待绘制点的位置信息可以用(y i,x i)表示,i表示第i个待绘制点。具体地,可以根据待绘制点的Y轴坐标值(即y i)的大小来确定标注线条在该待绘制点处的粗细大小,使得不同Y轴坐标值的待绘制点处的线条粗细程度不同。在一个实施例中,各待绘制点处的粗细大小与Y轴坐标值大小反相关,即待绘制点的Y轴坐标值越小,对应位置处的标注线条越粗。
S108,根据各车道线对应的待绘制点的位置信息、以及标注线条在各待绘制点处的粗细信息,绘制各车道线对应的标注线条,获得车道线场景图对应的车道线语义标签图。
车道线语义标签图表示对车道线场景图中的车道线进行标注获得的图片,图3示出了一个实施例中车道线语义标签图的示意图,该车道线语义标签图为图2所示的车道线场景图对应的车道线语义标签图,其中包括四条标注线条,分别对应图2所示的车道线场景图中的四条车道线,各标注线条在图3中呈现近端较粗远端较细的特点,与各车道线在图2中呈现的近端较粗远端较细的特点相符。
上述车道线标注方法中,基于车道线场景图中车道线上的标注点的位置信息获得车道线对应的待绘制点的位置信息,通过车道线对应的待绘制点的位置 信息确定车道线对应的标注线条在各待绘制点处的粗细信息,使得不同位置的待绘制点处的线条粗细程度不同,根据车道线对应的待绘制点的位置信息、以及标注线条在各待绘制点处的粗细信息绘制的车道线标注线条可实现近端粗远端细,从而减少车道线语义标签图中标注的车道线在远端的粘连情况,提高后续车道线检测准确性。
在一个实施例中,根据各车道线上的标注点的位置信息,确定各车道线对应的待绘制点的位置信息的步骤,具体可以包括以下步骤:根据各车道线上的标注点中的相邻标注点的位置信息进行线性插值,获得相邻标注点之间的插值点的位置信息;基于各车道线上的标注点以及插值点的位置信息,确定各车道线对应的待绘制点的位置信息。
车道线上的标注点可以由人工进行标注,为了减轻人工标注的工作量,相邻标注点之间可以间隔一段距离,然后通过线性插值在所有相邻的标注点之间插入插值点,最后将所有标注点和所有插值点作为待绘制点。
对于同一条车道线上的各标注点,任意相邻的两个标注点可以确定一条直线,具体可以基于相邻的两个标注点的位置信息拟合得到一次线性方程,根据一次线性方程计算相邻的两个标注点之间的插值点的位置信息。
具体而言,例如相邻的两个标注点(P 1、P 2)的位置信息分别为(y 1,x 1)和(y 2,x 2),则由P1、P2的位置信息拟合得到的一次线性方程为:x=ky+b,其中,k=(x 2-x 1)/(y 2-y 1),b=x 1-ky 1,在y 1到y 2之间选取中间值,作为插值点的Y轴坐标值,然后将插值点的Y轴坐标值作为已知变量,代入到上述一次线性方程,计算获得插值点的X轴坐标值,从而获得插值点的位置信息。可以理解,在y 1到y 2之间可以选取一个或者多于一个的中间值,从而P1与P2之间的插值点数量可以是一个也可以是多于一个。
本实施例中,通过车道线上的相邻标注点的位置信息进行线性插值,获得相邻标注点之间的插值点的位置信息,再基于所有标注点和所有插值点的位置信息,确定车道线对应的待绘制点的位置信息,据此可以减轻人工标注的工作量,快速获得绘制车道线标注线条所需的待绘制点,提高标注效率,同时还可以减少人工标注错误,提高标注准确性。
在一个实施例中,位置信息包括基于车道线场景图建立的坐标系下第一坐 标轴方向的第一坐标值,第一坐标轴方向表示与车道线延伸方向对应的方向;基于各车道线对应的待绘制点的位置信息,确定各车道线对应的标注线条在各待绘制点处的粗细信息的步骤,具体可以是:基于各车道线对应的各待绘制点的第一坐标值大小,确定各车道线对应的标注线条在各待绘制点处的粗细大小,使得标注线条的粗细大小沿对应的车道线延伸方向递减。
参考前述实施例中的坐标系,第一坐标轴方向为Y轴方向,第一坐标值为Y轴坐标值,车道线延伸方向具体表示车道线从近端到远端的延伸方向,Y轴方向与车道线延伸方向对应,可以理解为随着车道线从近端向远端延伸,对应的Y轴坐标值逐渐增大。
具体而言,一车道线对应的待绘制点的第一坐标值用y i表示,该车道线对应的标注线条在各待绘制点处的粗细大小的确定方式如下:ε i=αy i+β,其中,ε i表示标注线条在y i对应的待绘制点位置处的粗细大小,α和β表示调节因子,可以根据实际情况进行设置。按照前述实施例中的坐标系,α为一负值,从而ε i随着y i增大而减小,使得标注线条的粗细大小沿对应的车道线延伸方向递减。
本实施例中,通过各车道线对应的各待绘制点的第一坐标值大小,确定各车道线对应的标注线条在各待绘制点处的粗细大小,使得标注线条的粗细大小沿对应的车道线延伸方向递减,从而减少标注的车道线在远端的粘连情况,提高后续车道线检测准确性。
在一个实施例中,还可以包括以下步骤:基于车道线场景图获取各车道线上的标注点的类别信息;根据各车道线上的标注点的类别信息,获得各车道线对应的待绘制点的类别信息。根据各车道线对应的待绘制点的位置信息、以及标注线条在各待绘制点处的粗细信息,绘制各车道线对应的标注线条,获得车道线场景图对应的车道线语义标签图的步骤,具体可以是:根据各车道线对应的待绘制点的位置信息、类别信息以及标注线条在各待绘制点处的粗细信息,绘制各车道线对应的标注线条,获得车道线场景图对应的车道线语义标签图。
类别信息用于指示标注点所属的车道线类别,类别信息具体可以是颜色信息,即采用不同的颜色指示不同的车道线类别,如图2所示,车道线类别包括实线和虚线,可以用第一颜色(例如红色)指示实线类别,用第二颜色(例如绿色)指示虚线类别,从而在绘制各车道线对应的标注线条时,除了根据待绘 制点的位置信息控制绘制线条的粗细,还可以根据车道线类别控制绘制线条的颜色,如图3所示,四条标注线条的颜色从左至右依次为红色、绿色、绿色、红色。
本实施例中,通过标注点的类别信息指示对应的车道线类别,后续进行车道线检测时,不仅能够检测出车道线位置,还可以识别出车道线类别,使车道线检测结果更为全面。
在一个实施例中,如图4所示,提供了一种车道线检测模型的确定方法,以该方法应用于车辆控制器为例进行说明,包括以下步骤S402至步骤S408。
S402,获取样本车道线场景图。
样本车道线场景图表示包含车道线的道路场景图,具体可以通过安装于车辆上的摄像头在车辆行驶的过程中针对前方道路拍摄获得。样本车道线场景图作为训练集,用于训练生成对抗网络模型。
S404,获取样本车道线场景图对应的车道线语义标签图。
可以采用上述任一实施例中的车道线标注方法,获得样本车道线场景图对应的车道线语义标签图。
S406,基于样本车道线场景图以及车道线语义标签图,对待训练生成对抗网络模型进行训练,获得训练后的生成对抗网络模型。
生成对抗网络模型包括生成器和判别器,生成器用于将输入的样本车道线场景图生成车道线语义图,判别器旨在将车道线语义标签图与生成的车道线语义图区分开来。生成对抗网络模型的训练目标是使得车道线语义图与车道线语义标签图的区别最小化,生成器和判别器分别基于损失函数进行对抗训练,最终得到网络模型的最优参数。
S408,根据训练后的生成对抗网络模型中的生成器,确定车道线检测模型。
训练后的生成对抗网络模型中的生成器可以作为车道线检测模型,将待检测的车道线场景图片输入训练后的生成器中,可以生成对应的车道线语义图。
上述车道线检测模型的确定方法中,与一般的生成对抗网络将语义图作为输入来生成自然场景图不同,通过逆向使用生成对抗网络,将真实的车道线场景图作为输入进行语义分割,生成车道线语义图,有利于去除复杂背景,且可以生成遮挡区域的车道线,鲁棒性更好,适应性更强。
在一个实施例中,如图5所示,提供了一种车道线检测方法,以该方法应用于车辆控制器为例进行说明,包括以下步骤S502至步骤S506。
S502,获取待检测车道线场景图。
待检测车道线场景图表示包含待检测车道线的道路场景图,具体可以通过安装于车辆上的摄像头在车辆行驶的过程中针对前方道路拍摄获得。
S504,对待检测车道线场景图进行车道线检测,获得车道线语义图,车道线语义图中包括各像素点的位置信息。
可以采用车道线检测模型,对待检测车道线场景图进行车道线检测,获得车道线语义图。车道线检测模型具体可以是训练后的生成对抗网络模型中的生成器,车道线检测模型的确定方法可以参见上文实施例,此处不再赘述。
S506,基于车道线语义图中各像素点的位置信息,确定待检测车道线场景图中的车道线。
像素点表示检测出的车道线包含的点,像素点的位置信息具体可以是像素点在基于车道线语义图建立的坐标系下的坐标值。在一个实施例中,以车道线语义图的左下角点为坐标原点,以竖直方向为第一坐标轴(用Y轴表示)方向(竖直向上为正向),以水平方向为第二坐标轴(用X轴表示)方向(水平向右为正向),建立坐标系,像素点的位置信息用(y,x)表示,其中,y表示像素点的Y轴坐标值,x表示像素点的X轴坐标值。
上述车道线检测方法中,将待检测车道线场景图输入生成对抗网络模型的生成器中,生成对应的车道线语义图,可实现端到端检测,省去对车道线场景图进行预处理和计算等步骤,检测距离更远、手动调参量更少且鲁棒性更好,相对于基于概率图的语义分割神经网络的车道线检测方法只能检测固定数量车道线且无法生成遮挡区域的车道线,使用生成对抗网络进行车道线检测,可以同时检测出车道线场景图中的所有车道线,并可以生成遮挡区域的车道线,从而提高车道线检测准确性,能够适应大多数复杂的道路场景。
在一个实施例中,基于车道线语义图中各像素点的位置信息,确定待检测车道线场景图中的车道线的步骤,具体可以包括以下步骤:基于车道线语义图中各像素点的位置信息,获得各连通区域的车道线轮廓;对于每一车道线轮廓,根据车道线轮廓的轮廓点位置信息,判断车道线轮廓是否为粘连车道线轮廓; 当车道线轮廓为粘连车道线轮廓时,根据粘连车道线轮廓的轮廓点位置信息对粘连车道线轮廓进行分割,获得分割车道线轮廓;根据车道线轮廓中的非粘连车道线轮廓和分割车道线轮廓,确定目标车道线轮廓,基于各目标车道线轮廓的轮廓点,确定待检测车道线场景图中的车道线。
其中,轮廓点位置信息具体是轮廓点的第一坐标极值位置信息,通过轮廓点的第一坐标极值位置信息,判断车道线轮廓是否为粘连车道线轮廓,并对粘连车道线轮廓进行分割。目标车道线轮廓表示最终用于曲线拟合来确定车道线的轮廓。此外,基于车道线语义图获得的车道线轮廓,或分割后得到的车道线轮廓中,还可能存在同一条车道线断开的情况,即可能存在多个车道线轮廓对应同一条车道线的情况,因此还需对相应的车道线轮廓进行合并,车道线轮廓的合并将在后文实施例中详细描述。
检测得到车道线语义图后,还可以先对车道线语义图进行预处理,具体包括以下步骤:对车道线语义图进行闭运算,以填充车道线语义图中的孔洞,便于后续查找完整的封闭车道线轮廓;对闭运算后的车道线语义图进行开运算,以减少前面闭运算导致的两条不同的车道线粘连的情况;对开运算后的车道线语义图进行二值化,以滤除部分噪声像素点。
对车道线语义图进行预处理后,基于预处理后的车道线语义图中像素点的位置分布情况,获得各连通区域的封闭轮廓,视为初始的各车道线轮廓,计算各封闭轮廓的周长,将轮廓周长小于周长阈值的封闭轮廓进行剔除,以滤除噪声车道线,获得滤除噪声后的车道线轮廓,再进行后续分割和合并处理。
在一个实施例中,轮廓点位置信息包括基于车道线语义图建立的坐标系下第一坐标轴方向的第一坐标值,第一坐标轴方向表示与车道线延伸方向对应的方向;根据车道线轮廓的轮廓点位置信息,判断车道线轮廓是否为粘连车道线轮廓的步骤,具体可以包括以下步骤:以车道线轮廓的任一轮廓点为起始点,按照车道线轮廓的轮廓方向,依次获取车道线轮廓的各轮廓点的第一坐标值;根据依次获得的各轮廓点的第一坐标值,获得第一坐标极大值和第一坐标极小值;根据第一坐标极大值的数量以及第一坐标极小值的数量,判断车道线轮廓是否为粘连车道线轮廓。
具体地,当第一坐标极大值的数量以及第一坐标极小值的数量中,至少有 一个数量大于1,判定车道线轮廓为粘连车道线轮廓,粘连车道线轮廓可以理解为包含至少两条不同的车道线对应的轮廓。当第一坐标极大值的数量以及第一坐标极小值的数量均为1,判定车道线轮廓为非粘连车道线轮廓,非粘连车道线轮廓可以理解为同一条车道线对应的轮廓。
第一坐标轴方向为Y轴方向,第一坐标值为Y轴坐标值,以车道线轮廓的任一轮廓点为起始点,按照车道线轮廓的轮廓方向,依次获取车道线轮廓的各轮廓点的Y轴坐标值,根据依次获得的各轮廓点的Y轴坐标值,获得Y轴坐标极大值和Y轴坐标极小值,Y轴坐标极大值和Y轴坐标极小值对应的轮廓点分别称为极大值点和极小值点。寻找极值点的方法具体可以如下:以车道线轮廓的任一轮廓点为起始点,按照车道线轮廓的逆时针方向顺序,依次存储将各轮廓点,获得轮廓点集,可以理解,轮廓点集中存储的最后一个轮廓点是存储的第一个轮廓点的右相邻点;若某一轮廓点左右相邻N个点与该轮廓点的Y轴坐标值之差均大于一阈值,则该轮廓点为极小值点;若某一轮廓点与之左右相邻N个点的Y轴坐标值之差均大于一阈值,则该轮廓点为极大值点。其中,N为正整数,可以结合实际需求进行设置,例如可以设为2或3;阈值为正数,可以结合实际需求进行设置,此处不做限定。
需要说明的是,寻找到的极值点中,可能存在极大值点和极小值点的数量不完全匹配的情况,如果找到多个相距较近(相距较近的意思是指轮廓点存储索引相距较近)的同一性质的极值点(极大值点或极小值点),则取其中排序靠前的一个极值点,滤掉多余误检极值点。上述数量不匹配的情况发生概率很小,下文以极大值和极小值一一对应的情况为例来进行说明。
图6和图7分别示出了一个实施例中的车道线轮廓示意图,由图可见,图6中有两个Y轴坐标极大值(分别对应轮廓点YE_max1和YE_max2)和两个Y轴坐标极小值(分别对应轮廓点YE_min1和YE_min2),即图6所示的车道线轮廓为粘连车道线轮廓,包含两条不同车道线对应的轮廓。图7中有三个Y轴坐标极大值(分别对应轮廓点YE_max1、YE_max2和YE_max3)和三个Y轴坐标极小值(分别对应轮廓点YE_min1、YE_min2和YE_min3),即图7所示的车道线轮廓为粘连车道线轮廓,包含三条不同车道线对应的轮廓。
当检测出的车道线存在粘连情况时,会对后续曲线拟合的准确性产生影响, 而且可能会将多条粘连的车道线判定为同一条车道线,因此需要对粘连车道线进行分割。
在一个实施例中,当车道线轮廓为粘连车道线轮廓时,根据车道线轮廓的轮廓点位置信息对粘连车道线轮廓进行分割,获得分割车道线轮廓的步骤,具体可以包括以下步骤:以任一第一坐标极大值对应的第一坐标极大值点为起始点,按照粘连车道线轮廓的轮廓方向,对各第一坐标极大值点进行排序;基于相邻序号的第一坐标极大值点之间的轮廓点,获得分割车道线轮廓。
具体地,按照车道线轮廓的逆时针方向顺序,依次存储各轮廓点,获得轮廓点集,以离轮廓点集中存储的第一个轮廓点最近的第一坐标极大值点为起始点,按照车道线轮廓的逆时针方向顺序,对各第一坐标极大值点进行排序。以图7所示的粘连车道线轮廓为例,假设以YE_max1为起始点,沿轮廓逆时针方向,对各Y轴极大值点进行排序,则YE_max1、YE_max2、YE_max3的序号依次为1、2、3,以YE_max1、YE_max2、YE_max3作为分割点,将粘连车道线轮廓分为3段,获得3段分割车道线轮廓,第一段分割车道线轮廓的轮廓点包括YE_max1与YE_max2之间的轮廓点,第二段分割车道线轮廓的轮廓点包括YE_max2与YE_max3之间的轮廓点,第三段分割车道线轮廓的轮廓点包括YE_max3与YE_max1之间的轮廓点。
上述实施例中,基于第一坐标极大值点对粘连车道线轮廓进行分割,后续可以对获得的各分割车道线轮廓的轮廓点分别进行曲线拟合,避免车道线粘连对于曲线拟合的影响,提高拟合后车道线的准确性。
当检测出的车道线存在同一条车道线断开的情况时,也会对后续曲线拟合的准确性产生影响,可能会将属于同一条车道线的多条断开车道线判定为多条车道线,因此需要对断开车道线进行合并。
在一个实施例中,根据车道线轮廓中的非粘连车道线轮廓和分割车道线轮廓,确定目标车道线轮廓的步骤,具体可以包括以下步骤:对于分割后轮廓中的任意两个车道线轮廓,根据两个车道线轮廓的轮廓点位置信息,判断两个车道线轮廓是否对应同一条车道线,分割后轮廓包括非粘连车道线轮廓和分割车道线轮廓;当两个车道线轮廓对应同一条车道线时,将两个车道线轮廓的轮廓点合并,获得合并车道线轮廓;根据分割后轮廓中对应不同条车道线的车道线 轮廓和合并车道线轮廓,确定目标车道线轮廓。
根据两个车道线轮廓的轮廓点位置信息,判断两个车道线轮廓是否对应同一条车道线的步骤,具体可以包括以下步骤:获取第一车道线轮廓的第一最低点和第一最高点的位置信息,获取第二车道线轮廓的第二最低点和第二最高点的位置信息,最低点和最高点基于各轮廓点的第一坐标值确定,第一最高点的第一坐标值大于或等于第二最高点的第一坐标值;分别对第一车道线轮廓和第二车道线轮廓的轮廓点进行直线拟合,获得第一拟合线和第二拟合线,分别计算第一拟合线和第二拟合线的斜率,获得第一斜率和第二斜率;根据第一斜率与第二斜率的斜率差值、第一车道线轮廓与第二车道线轮廓的第一距离、以及第一最低点与第二最高点的第二距离,判断两个车道线轮廓是否对应同一条车道线。
其中,第一最低点是指第一车道线轮廓的最低点,即第一车道线轮廓中Y轴坐标最小值对应的轮廓点,第一最高点是指第一车道线轮廓的最高点,即第一车道线轮廓中Y轴坐标最大值对应的轮廓点。第二最低点是指第二车道线轮廓的最低点,即第二车道线轮廓中Y轴坐标最小值对应的轮廓点,第二最高点是指第二车道线轮廓的最高点,即第二车道线轮廓中Y轴坐标最大值对应的轮廓点。第一最高点的Y轴坐标值大于或等于第二最高点的Y轴坐标值。第一距离是指第一车道线轮廓与第二车道线轮廓之间的距离,第二距离表示第一车道线轮廓的最低点与第二车道线轮廓的最高点之间的距离。
举例来说,图8示出了一个实施例中的车道线轮廓示意图,第一车道线轮廓eline位于第二车道线轮廓line上方,ps1点和pe1点分别表示第一车道线轮廓eline的最高点和最低点,ps2点和pe2点分别表示第二车道线轮廓line的最高点和最低点。
第一车道线轮廓eline与第二车道线轮廓line之间的距离,可以用两轮廓中的一个轮廓顶点到另一个轮廓拟合线的距离来表示,具体可以是第一车道线轮廓eline的最低点pe1到第二车道线轮廓line的拟合线的距离,或是第一车道线轮廓eline的最高点ps1到第二车道线轮廓line的拟合线的距离,或是第二车道线轮廓line的最低点pe2到第一车道线轮廓eline的拟合线的距离,或是第二车道线轮廓line的最高点ps2到第一车道线轮廓eline的拟合线的距离。
在一个实施例中,当斜率差值小于第一阈值、且第一距离小于第二阈值、且第二距离小于第三阈值时,判定两个车道线轮廓对应同一条车道线。即当两个车道线轮廓的斜率接近、第一距离和第二距离较小时,认为该两个车道线轮廓属于同一条车道线,可将该两个车道线轮廓的轮廓点合并,获得合并车道线轮廓。其中,第一阈值、第二阈值、第三阈值均可以根据实际情况进行设置。
需要说明的是,除了采用上述斜率差值、第一距离和第二距离作为判断两车道线轮廓是否对应同一条车道线的参数时,还可以采用其他参数来进行判断。例如,如图8所示,参数还可以包括:第一车道线轮廓eline的最低点pe1与第二车道线轮廓line的最低点pe2的Y轴坐标值绝对值差(用Ly表示)、第一车道线轮廓eline的最低点pe1与第二车道线轮廓line的最低点pe2的X轴坐标值绝对值差(用Lx表示)、第一车道线轮廓eline的最高点ps1与第二车道线轮廓line的最高点ps2的Y轴坐标值绝对值差(用Hy表示)、第一车道线轮廓eline的最高点ps1与第二车道线轮廓line的最高点ps2的X轴坐标值绝对值差(用Hx表示),当Ly和Hy较大、Lx和Hx较小时,认为第一车道线轮廓eline和第二车道线轮廓line很可能对应同一条车道线。
上述实施例中,通过对属于同一条车道线的多个车道线轮廓进行合并,可有效合并断开的车道线轮廓,提高后续曲线拟合准确率。
对检测出的车道线轮廓中需要分割或合并的车道线轮廓进行上述分割以及合并处理后,获得最终的目标车道线轮廓,可以对各目标车道线轮廓的轮廓点进行三次曲线拟合,得到曲线拟合参数,并将拟合后的车道线显示出来,作为最终的车道线检测结果。
需要说明的是,若检测出的车道线轮廓中不存在需要分割或合并的轮廓,则检测出的车道线轮廓即为目标车道线轮廓;若检测出的车道线轮廓中存在粘连车道线轮廓,则对粘连车道线轮廓进行分割,获得分割车道线轮廓,将检测出的车道线轮廓中的非粘连车道线轮廓和分割得到的分割车道线轮廓作为分割后轮廓,若分割后轮廓中不存在需合并的轮廓,则分割后轮廓即为目标车道线轮廓;若分割后轮廓中存在需合并的轮廓,则将需合并的轮廓进行合并,获得合并车道线轮廓,将分割后轮廓中不需要进行合并的车道线轮廓和合并得到的合并车道线轮廓作为目标车道线轮廓。
在一个实施例中,车道线语义图中还包括各像素点的类别信息,类别信息用于指示像素点所属的车道线类别;获得目标车道线轮廓后,还可以根据各目标车道线轮廓中各像素点的类别信息,确定各目标车道线轮廓对应的车道线类别。
具体地,对于每一目标车道线轮廓,统计目标车道线轮廓中各类别信息对应的像素点数量,将对应的像素点数量最多的类别信息所指示的车道线类别,确定为目标车道线轮廓对应的车道线类别。
举例来说,类别信息为车道线语义图中的颜色信息,车道线类别包括实线和虚线,用红色指示实线类别,用绿色指示虚线类别。具体地,可以根据像素点的RGB值判定该像素点属于实线还是虚线,若像素点的R通道的值与G通道的值的差值大于预设阈值,认为该像素点对应实线类别,若像素点的G通道的值与R通道的值的差值大于预设阈值,认为该像素点对应虚线类别。对于每一目标车道线轮廓,统计实线类别和虚线类别分别对应的像素点数量,若实线类别对应的像素点数量大于虚线类别对应的像素点数量,则判定目标车道线轮廓对应的车道线类别为实线,若虚线类别对应的像素点数量大于实线类别对应的像素点数量,则判定目标车道线轮廓对应的车道线类别为虚线。
上述实施例中,通过目标车道线轮廓所包含的各像素点的类别信息判定对应的车道线类别,从而不仅能够检测出车道线场景图中车道线的位置,还可以识别出车道线类别,使车道线检测结果更为全面。
应该理解的是,虽然图1、4、5的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,图1、4、5中的至少一部分步骤可以包括多个步骤或者多个阶段,这些步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤中的步骤或者阶段的至少一部分轮流或者交替地执行。
在一个实施例中,如图9所示,提供了一种车道线标注装置900,包括:标注点信息获取模块910、待绘制点信息确定模块920、标注线条信息确定模块930和标注线条绘制模块940,其中:
标注点信息获取模块910,用于基于车道线场景图获取各车道线上的标注点的位置信息。
待绘制点信息确定模块920,用于根据各车道线上的标注点的位置信息,确定各车道线对应的待绘制点的位置信息。
标注线条信息确定模块930,用于基于各车道线对应的待绘制点的位置信息,确定各车道线对应的标注线条在各待绘制点处的粗细信息。
标注线条绘制模块940,用于根据各车道线对应的待绘制点的位置信息、以及标注线条在各待绘制点处的粗细信息,绘制各车道线对应的标注线条,获得车道线场景图对应的车道线语义标签图。
在一个实施例中,待绘制点信息确定模块920,具体用于:根据各车道线上的标注点中的相邻标注点的位置信息进行线性插值,获得相邻标注点之间的插值点的位置信息;基于各车道线上的标注点以及插值点的位置信息,确定各车道线对应的待绘制点的位置信息。
在一个实施例中,位置信息包括基于车道线场景图建立的坐标系下第一坐标轴方向的第一坐标值,第一坐标轴方向表示与车道线延伸方向对应的方向;标注线条信息确定模块930,具体用于:基于各车道线对应的各待绘制点的第一坐标值大小,确定各车道线对应的标注线条在各待绘制点处的粗细大小,使得标注线条的粗细大小沿对应的车道线延伸方向递减。
在一个实施例中,标注点信息获取模块910,还用于基于车道线场景图获取各车道线上的标注点的类别信息;待绘制点信息确定模块920,还用于根据各车道线上的标注点的类别信息,获得各车道线对应的待绘制点的类别信息;标注线条绘制模块940,具体用于根据各车道线对应的待绘制点的位置信息、类别信息以及标注线条在各待绘制点处的粗细信息,绘制各车道线对应的标注线条,获得车道线场景图对应的车道线语义标签图。
在一个实施例中,如图10所示,提供了一种车道线检测模型的确定装置1000,包括:样本获取模块1010、车道线标注模块1020、模型训练模块1030和模型确定模块1040,其中:
样本获取模块1010,用于获取样本车道线场景图。
车道线标注模块1020,用于采用前文任一实施例中的车道线标注方法对样 本车道线场景图进行车道线标注,获得样本车道线场景图对应的车道线语义标签图。
模型训练模块1030,用于基于样本车道线场景图以及车道线语义标签图,对待训练生成对抗网络模型进行训练,获得训练后的生成对抗网络模型。
模型确定模块1040,用于根据训练后的生成对抗网络模型中的生成器,确定车道线检测模型。
在一个实施例中,如图11所示,提供了一种车道线检测装置1100,包括:待检测图片获取模块1110、车道线检测模块1120和车道线确定模块1130,其中:
待检测图片获取模块1100,用于获取待检测车道线场景图。
车道线检测模块1120,用于采用前文任一实施例中的车道线检测模型的确定方法确定的车道线检测模型,对待检测车道线场景图进行车道线检测,获得车道线语义图,车道线语义图包括各像素点的位置信息。
车道线确定模块1130,用于基于车道线语义图中各像素点的位置信息,确定待检测车道线场景图中的车道线。
在一个实施例中,车道线确定模块1130包括:轮廓获取单元、第一判断单元、分割单元和确定单元。轮廓获取单元,用于基于车道线语义图中各像素点的位置信息,获得各连通区域的车道线轮廓;第一判断单元,用于对于每一车道线轮廓,根据车道线轮廓的轮廓点位置信息,判断车道线轮廓是否为粘连车道线轮廓;分割单元,用于当车道线轮廓为粘连车道线轮廓时,根据粘连车道线轮廓的轮廓点位置信息对粘连车道线轮廓进行分割,获得分割车道线轮廓;确定单元,用于根据车道线轮廓中的非粘连车道线轮廓和分割车道线轮廓,确定目标车道线轮廓,基于各目标车道线轮廓的轮廓点,确定待检测车道线场景图中的车道线。
在一个实施例中,轮廓点位置信息包括基于车道线语义图建立的坐标系下第一坐标轴方向的第一坐标值,第一坐标轴方向表示与车道线延伸方向对应的方向;第一判断单元具体用于:以车道线轮廓的任一轮廓点为起始点,按照车道线轮廓的轮廓方向,依次获取车道线轮廓的各轮廓点的第一坐标值;根据依次获得的各轮廓点的第一坐标值,获得第一坐标极大值和第一坐标极小值;根据第一坐标极大值数量以及第一坐标极小值的数量,判断车道线轮廓是否为粘 连车道线轮廓。
在一个实施例中,第一判断单元还用于当第一坐标极大值的数量以及第一坐标极小值的数量中,至少有一个数量大于1,判定车道线轮廓为粘连车道线轮廓。
在一个实施例中,分割单元具体用于:以任一第一坐标极大值对应的第一坐标极大值点为起始点,按照粘连车道线轮廓的轮廓方向,对各第一坐标极大值点进行排序;基于相邻序号的第一坐标极大值点之间的轮廓点,获得分割车道线轮廓。
在一个实施例中,确定单元具体用于:对于分割后轮廓中的任意两个车道线轮廓,根据两个车道线轮廓的轮廓点位置信息,判断两个车道线轮廓是否对应同一条车道线,分割后轮廓包括非粘连车道线轮廓和分割车道线轮廓;当两个车道线轮廓对应同一条车道线时,将两个车道线轮廓的轮廓点合并,获得合并车道线轮廓;根据分割后轮廓中对应不同条车道线的车道线轮廓和合并车道线轮廓,确定目标车道线轮廓。
在一个实施例中,第二判断单元具体用于:获取第一车道线轮廓的第一最低点和第一最高点的位置信息,获取第二车道线轮廓的第二最低点和第二最高点的位置信息,最低点和最高点基于各轮廓点的第一坐标值确定,第一最高点的第一坐标值大于或等于第二最高点的第一坐标值;分别对第一车道线轮廓和第二车道线轮廓的轮廓点进行直线拟合,获得第一拟合线和第二拟合线,分别计算第一拟合线和第二拟合线的斜率,获得第一斜率和第二斜率;根据第一斜率与第二斜率的斜率差值、第一车道线轮廓与第二车道线轮廓的第一距离、以及第一最低点与第二最高点的第二距离,判断两个车道线轮廓是否对应同一条车道线。
在一个实施例中,第一距离包括下述各项中的任意一项:第一最低点到第二拟合线的距离;第一最高点到第二拟合线的距离;第二最低点到第一拟合线的距离;第二最高点到第一拟合线的距离。第二判断单元还用于:当斜率差值小于第一阈值、且第一距离小于第二阈值、且第二距离小于第三阈值时,判定两个车道线轮廓对应同一条车道线。
在一个实施例中,车道线语义图中还包括各像素点的类别信息,类别信息 用于指示像素点所属的车道线类别。车道线确定模块1130还用于:根据各目标车道线轮廓中各像素点的类别信息,确定各目标车道线轮廓对应的车道线类别。
在一个实施例中,车道线确定模块1130具体用于:对于每一目标车道线轮廓,统计目标车道线轮廓中各类别信息对应的像素点数量,将对应的像素点数量最多的类别信息所指示的车道线类别,确定为目标车道线轮廓对应的车道线类别。
关于车道线标注、车道线检测模型的确定以及车道线检测装置的具体限定可以参见上文中对于车道线标注、车道线检测模型的确定以及车道线检测方法的限定,在此不再赘述。上述车道线标注、车道线检测模型的确定以及车道线检测装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中,也可以以软件形式存储于计算机设备中的存储器中,以便于处理器调用执行以上各个模块对应的操作。
在一个实施例中,提供了一种计算机设备,该计算机设备可以是服务器,其内部结构图可以如图12所示。该计算机设备包括通过系统总线连接的处理器、存储器和网络接口。其中,该计算机设备的处理器用于提供计算和控制能力。该计算机设备的存储器包括非易失性存储介质、内存储器。该非易失性存储介质存储有操作系统、计算机程序和数据库。该内存储器为非易失性存储介质中的操作系统和计算机程序的运行提供环境。该计算机设备的网络接口用于与外部的终端通过网络连接通信。该计算机程序被处理器执行时以实现一种车道线标注、车道线检测模型的确定以及车道线检测方法。
在一个实施例中,提供了一种计算机设备,该计算机设备可以是终端,其内部结构图可以如图13所示。该计算机设备包括通过系统总线连接的处理器、存储器、通信接口、显示屏和输入装置。其中,该计算机设备的处理器用于提供计算和控制能力。该计算机设备的存储器包括非易失性存储介质、内存储器。该非易失性存储介质存储有操作系统和计算机程序。该内存储器为非易失性存储介质中的操作系统和计算机程序的运行提供环境。该计算机设备的通信接口用于与外部的终端进行有线或无线方式的通信,无线方式可通过WIFI、运营商网络、NFC(近场通信)或其他技术实现。该计算机程序被处理器执行时以实 现一种车道线标注、车道线检测模型的确定以及车道线检测方法。该计算机设备的显示屏可以是液晶显示屏或者电子墨水显示屏,该计算机设备的输入装置可以是显示屏上覆盖的触摸层,也可以是计算机设备外壳上设置的按键、轨迹球或触控板,还可以是外接的键盘、触控板或鼠标等。
本领域技术人员可以理解,图12或图13中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的计算机设备的限定,具体的计算机设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
在一个实施例中,提供了一种计算机设备,包括存储器和处理器,存储器中存储有计算机程序,该处理器执行计算机程序时实现上述各个方法实施例中的步骤。
在一个实施例中,提供了一种计算机可读存储介质,其上存储有计算机程序,计算机程序被处理器执行时实现上述各个方法实施例中的步骤。
在一个实施例中,提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行上述各个方法实施例中的步骤。
需要理解的是,上述实施例中的术语“第一”、“第二”等仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,所述的计算机程序可存储于一非易失性计算机可读取存储介质中,该计算机程序在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、存储、数据库或其它介质的任何引用,均可包括非易失性和易失性存储器中的至少一种。非易失性存储器可包括只读存储器(Read-Only Memory,ROM)、磁带、软盘、闪存或光存储器等。易失性存储器可包括随机存取存储器(Random Access Memory,RAM)或外部高速缓冲存储器。作为说明而非局限,RAM可以是多种形式,比如静态随机存取存储器(Static Random Access Memory,SRAM)或动态随机存取存储器(Dynamic Random Access Memory,DRAM)等。
以上实施例的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本说明书记载的范围。
以上所述实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对发明专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请专利的保护范围应以所附权利要求为准。

Claims (20)

  1. 一种车道线标注方法,其特征在于,所述方法包括:
    基于车道线场景图获取各车道线上的标注点的位置信息;
    根据各所述车道线上的标注点的位置信息,确定各所述车道线对应的待绘制点的位置信息;
    基于各所述车道线对应的待绘制点的位置信息,确定各所述车道线对应的标注线条在各所述待绘制点处的粗细信息;
    根据各所述车道线对应的待绘制点的位置信息、以及标注线条在各所述待绘制点处的粗细信息,绘制各所述车道线对应的标注线条,获得所述车道线场景图对应的车道线语义标签图。
  2. 根据权利要求1所述的方法,其特征在于,根据各所述车道线上的标注点的位置信息,确定各所述车道线对应的待绘制点的位置信息,包括:
    根据各所述车道线上的标注点中的相邻标注点的位置信息进行线性插值,获得所述相邻标注点之间的插值点的位置信息;
    基于各所述车道线上的标注点以及插值点的位置信息,确定各所述车道线对应的待绘制点的位置信息。
  3. 根据权利要求2所述的方法,其特征在于,所述位置信息包括基于所述车道线场景图建立的坐标系下第一坐标轴方向的第一坐标值,所述第一坐标轴方向表示与车道线延伸方向对应的方向;
    基于各所述车道线对应的待绘制点的位置信息,确定各所述车道线对应的标注线条在各所述待绘制点处的粗细信息,包括:
    基于各所述车道线对应的各待绘制点的第一坐标值大小,确定各所述车道线对应的标注线条在各所述待绘制点处的粗细大小,使得所述标注线条的粗细大小沿对应的车道线延伸方向递减。
  4. 根据权利要求1至3任意一项所述的方法,其特征在于,还包括:基于所述车道线场景图获取各车道线上的标注点的类别信息;根据各所述车道线上 的标注点的类别信息,获得各所述车道线对应的待绘制点的类别信息;
    根据各所述车道线对应的待绘制点的位置信息、以及标注线条在各所述待绘制点处的粗细信息,绘制各所述车道线对应的标注线条,获得所述车道线场景图对应的车道线语义标签图,包括:
    根据各所述车道线对应的待绘制点的位置信息、类别信息以及标注线条在各所述待绘制点处的粗细信息,绘制各所述车道线对应的标注线条,获得所述车道线场景图对应的车道线语义标签图。
  5. 一种车道线检测模型的确定方法,其特征在于,所述方法包括:
    获取样本车道线场景图;
    采用权利要求1至4任意一项所述的方法对所述样本车道线场景图进行车道线标注,获得所述样本车道线场景图对应的车道线语义标签图;
    基于所述样本车道线场景图以及所述车道线语义标签图,对待训练生成对抗网络模型进行训练,获得训练后的生成对抗网络模型;
    根据所述训练后的生成对抗网络模型中的生成器,确定车道线检测模型。
  6. 一种车道线检测方法,其特征在于,所述方法包括:
    获取待检测车道线场景图;
    采用权利要求5所述的方法确定的车道线检测模型,对所述待检测车道线场景图进行车道线检测,获得车道线语义图,所述车道线语义图中包括各像素点的位置信息;
    基于所述车道线语义图中各像素点的位置信息,确定所述待检测车道线场景图中的车道线。
  7. 根据权利要求6所述的方法,其特征在于,基于所述车道线语义图中各像素点的位置信息,确定所述待检测车道线场景图中的车道线,包括:
    基于所述车道线语义图中各像素点的位置信息,获得各连通区域的车道线 轮廓;
    对于每一车道线轮廓,根据所述车道线轮廓的轮廓点位置信息,判断所述车道线轮廓是否为粘连车道线轮廓;
    当所述车道线轮廓为粘连车道线轮廓时,根据所述粘连车道线轮廓的轮廓点位置信息对所述粘连车道线轮廓进行分割,获得分割车道线轮廓;
    根据所述车道线轮廓中的非粘连车道线轮廓和所述分割车道线轮廓,确定目标车道线轮廓,基于各所述目标车道线轮廓的轮廓点,确定所述待检测车道线场景图中的车道线。
  8. 根据权利要求7所述的方法,其特征在于,所述轮廓点位置信息包括基于所述车道线语义图建立的坐标系下第一坐标轴方向的第一坐标值,所述第一坐标轴方向表示与车道线延伸方向对应的方向;
    根据所述车道线轮廓的轮廓点位置信息,判断所述车道线轮廓是否为粘连车道线轮廓,包括:
    以所述车道线轮廓的任一轮廓点为起始点,按照所述车道线轮廓的轮廓方向,依次获取所述车道线轮廓的各轮廓点的第一坐标值;
    根据依次获得的各轮廓点的第一坐标值,获得第一坐标极大值和第一坐标极小值;
    根据所述第一坐标极大值数量以及所述第一坐标极小值的数量,判断所述车道线轮廓是否为粘连车道线轮廓。
  9. 根据权利要求8所述的方法,其特征在于,当所述第一坐标极大值的数量以及所述第一坐标极小值的数量中,至少有一个数量大于1,判定所述车道线轮廓为粘连车道线轮廓。
  10. 根据权利要求9所述的方法,其特征在于,根据所述粘连车道线轮廓的轮廓点位置信息对所述粘连车道线轮廓进行分割,获得分割车道线轮廓,包括:
    以任一第一坐标极大值对应的第一坐标极大值点为起始点,按照所述粘连车道线轮廓的轮廓方向,对各第一坐标极大值点进行排序;
    基于相邻序号的第一坐标极大值点之间的轮廓点,获得分割车道线轮廓。
  11. 根据权利要求7所述的方法,其特征在于,根据所述车道线轮廓中的非粘连车道线轮廓和所述分割车道线轮廓,确定目标车道线轮廓,包括:
    对于分割后轮廓中的任意两个车道线轮廓,根据所述两个车道线轮廓的轮廓点位置信息,判断所述两个车道线轮廓是否对应同一条车道线,所述分割后轮廓包括所述非粘连车道线轮廓和所述分割车道线轮廓;
    当所述两个车道线轮廓对应同一条车道线时,将所述两个车道线轮廓的轮廓点合并,获得合并车道线轮廓;
    根据所述分割后轮廓中对应不同条车道线的车道线轮廓和所述合并车道线轮廓,确定目标车道线轮廓。
  12. 根据权利要求11所述的方法,其特征在于,根据所述两个车道线轮廓的轮廓点位置信息,判断所述两个车道线轮廓是否对应同一条车道线,包括:
    获取第一车道线轮廓的第一最低点和第一最高点的位置信息,获取第二车道线轮廓的第二最低点和第二最高点的位置信息,所述最低点和所述最高点基于各轮廓点的第一坐标值确定,所述第一最高点的第一坐标值大于或等于所述第二最高点的第一坐标值;
    分别对所述第一车道线轮廓和所述第二车道线轮廓的轮廓点进行直线拟合,获得第一拟合线和第二拟合线,分别计算所述第一拟合线和所述第二拟合线的斜率,获得第一斜率和第二斜率;
    根据所述第一斜率与所述第二斜率的斜率差值、所述第一车道线轮廓与所述第二车道线轮廓的第一距离、以及所述第一最低点与所述第二最高点的第二距离,判断所述两个车道线轮廓是否对应同一条车道线。
  13. 根据权利要求12所述的方法,其特征在于,所述第一距离包括下述各 项中的任意一项:所述第一最低点到所述第二拟合线的距离;所述第一最高点到所述第二拟合线的距离;所述第二最低点到所述第一拟合线的距离;所述第二最高点到所述第一拟合线的距离;
    当所述斜率差值小于第一阈值、且所述第一距离小于第二阈值、且所述第二距离小于第三阈值时,判定所述两个车道线轮廓对应同一条车道线。
  14. 根据权利要求7至13中任一项所述的方法,其特征在于,所述车道线语义图中还包括各像素点的类别信息,所述类别信息用于指示像素点所属的车道线类别;
    所述方法还包括:根据各所述目标车道线轮廓中各像素点的类别信息,确定各所述目标车道线轮廓对应的车道线类别。
  15. 根据权利要求14所述的方法,其特征在于,根据各所述目标车道线轮廓中各像素点的类别信息,确定各所述目标车道线轮廓对应的车道线类别,包括:
    对于每一目标车道线轮廓,统计所述目标车道线轮廓中各类别信息对应的像素点数量,将对应的像素点数量最多的类别信息所指示的车道线类别,确定为所述目标车道线轮廓对应的车道线类别。
  16. 一种车道线标注装置,其特征在于,所述装置包括:
    标注点信息获取模块,用于基于车道线场景图获取各车道线上的标注点的位置信息;
    待绘制点信息确定模块,用于根据各所述车道线上的标注点的位置信息,确定各所述车道线对应的待绘制点的位置信息;
    标注线条信息确定模块,用于基于各所述车道线对应的待绘制点的位置信息,确定各所述车道线对应的标注线条在各所述待绘制点处的粗细信息;
    标注线条绘制模块,用于根据各所述车道线对应的待绘制点的位置信息、以及标注线条在各所述待绘制点处的粗细信息,绘制各所述车道线对应的标注 线条,获得所述车道线场景图对应的车道线语义标签图。
  17. 一种车道线检测模型的确定装置,其特征在于,所述装置包括:
    样本获取模块,用于获取样本车道线场景图;
    车道线标注模块,用于采用权利要求1至4任意一项所述的方法对所述样本车道线场景图进行车道线标注,获得所述样本车道线场景图对应的车道线语义标签图;
    模型训练模块,用于基于所述样本车道线场景图以及所述车道线语义标签图,对待训练生成对抗网络模型进行训练,获得训练后的生成对抗网络模型;
    模型确定模块,用于根据所述训练后的生成对抗网络模型中的生成器,确定车道线检测模型。
  18. 一种车道线检测装置,其特征在于,所述装置包括:
    待检测图片获取模块,用于获取待检测车道线场景图;
    车道线检测模块,用于采用权利要求5所述的方法确定的车道线检测模型,对所述待检测车道线场景图进行车道线检测,获得车道线语义图,所述车道线语义图包括各像素点的位置信息;
    车道线确定模块,用于基于所述车道线语义图中各像素点的位置信息,确定所述待检测车道线场景图中的车道线。
  19. 一种计算机设备,包括存储器和处理器,所述存储器存储有计算机程序,其特征在于,所述处理器执行所述计算机程序时实现权利要求1至15中任一项所述的方法的步骤。
  20. 一种计算机可读存储介质,其上存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现权利要求1至15中任一项所述的方法的步骤。
PCT/CN2021/110183 2020-08-06 2021-08-03 车道线标注、检测模型确定、车道线检测方法及相关设备 WO2022028383A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010781121.XA CN114092903A (zh) 2020-08-06 2020-08-06 车道线标注、检测模型确定、车道线检测方法及相关设备
CN202010781121.X 2020-08-06

Publications (1)

Publication Number Publication Date
WO2022028383A1 true WO2022028383A1 (zh) 2022-02-10

Family

ID=80119998

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/110183 WO2022028383A1 (zh) 2020-08-06 2021-08-03 车道线标注、检测模型确定、车道线检测方法及相关设备

Country Status (2)

Country Link
CN (1) CN114092903A (zh)
WO (1) WO2022028383A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115033731A (zh) * 2022-07-04 2022-09-09 小米汽车科技有限公司 图像检索方法、装置、电子设备及存储介质
CN117893934A (zh) * 2024-03-15 2024-04-16 中国地震局地质研究所 一种改进的UNet3+网络无人机影像铁路轨道线检测方法与装置

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115497078B (zh) * 2022-11-15 2023-03-10 广汽埃安新能源汽车股份有限公司 车道线生成方法、装置、设备和计算机可读介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180341820A1 (en) * 2017-05-25 2018-11-29 Baidu Online Network Technology (Beijing) Co., Ltd. Method and Apparatus for Acquiring Information
CN109583393A (zh) * 2018-12-05 2019-04-05 宽凳(北京)科技有限公司 一种车道线端点识别方法及装置、设备、介质
CN109900279A (zh) * 2019-02-13 2019-06-18 浙江零跑科技有限公司 一种基于泊车位全局路由的停车场语义地图创建方法
CN110826412A (zh) * 2019-10-10 2020-02-21 江苏理工学院 高速公路能见度检测系统和方法
CN111212260A (zh) * 2018-11-21 2020-05-29 杭州海康威视数字技术股份有限公司 一种基于监控视频绘制车道线的方法及装置



Also Published As

Publication number Publication date
CN114092903A (zh) 2022-02-25


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21853400

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21853400

Country of ref document: EP

Kind code of ref document: A1