CN115050003A - Traffic cone line detection method, device, equipment and storage medium - Google Patents

Traffic cone line detection method, device, equipment and storage medium

Info

Publication number
CN115050003A
Authority
CN
China
Prior art keywords
traffic cone
line
traffic
road image
point
Prior art date
Legal status
Pending
Application number
CN202210602966.7A
Other languages
Chinese (zh)
Inventor
陈世佳
韩旭
Current Assignee
Guangzhou Weride Technology Co Ltd
Original Assignee
Guangzhou Weride Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Weride Technology Co Ltd
Priority to CN202210602966.7A
Publication of CN115050003A
Legal status: Pending

Classifications

    • G06V 20/56 — Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 — Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06N 3/02 — Neural networks
    • G06N 3/08 — Learning methods
    • G06N 3/084 — Backpropagation, e.g. using gradient descent
    • G06V 10/70 — Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 — Using classification, e.g. of video objects
    • G06V 10/765 — Using rules for classification or partitioning the feature space
    • G06V 10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/82 — Using neural networks


Abstract

The application discloses a traffic cone line detection method, device, equipment and storage medium. The method comprises: acquiring a road image to be marked, wherein the road image to be marked comprises at least one group of traffic cones; taking the intersection point of each traffic cone with the ground in the road image to be marked as a marking point, drawing traffic cone lines on the road image to be marked based on the marking points to obtain a marked road image, and generating a traffic cone line ordered point set for each traffic cone line; and processing a road image to be detected based on the marked road image and the traffic cone line ordered point set to obtain a traffic cone line detection result. This solves the technical problems in the prior art that, when associated traffic cones are connected into groups based on preset rules, a consistent result is difficult to obtain, the connection result fluctuates easily, and the robustness is low.

Description

Traffic cone line detection method, device, equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a traffic cone line detection method, device, equipment and storage medium.
Background
Due to temporary traffic problems, traffic cones are usually placed in roads, singly or in groups, to change the road traffic state, and they are an important basis for automated driving decisions. In the prior art, traffic cones are generally detected as one particular kind of obstacle, and the associated traffic cones are then connected into groups based on preset rules that consider the detected cones, the heading and position of the host vehicle, and the like, and the groups are provided to the decision making of the autonomous vehicle. Faced with varying traffic conditions, and especially complex road conditions, this approach can hardly obtain consistent results; the connection result fluctuates easily and the robustness is low.
Disclosure of Invention
The application provides a traffic cone line detection method, device, equipment and storage medium, which are used to solve the technical problems in the prior art that, when associated traffic cones are connected into groups based on preset rules, a consistent result is difficult to obtain, the connection result fluctuates easily, and the robustness is low.
In view of the above, a first aspect of the present application provides a traffic cone line detection method, including:
acquiring a road image to be marked, wherein the road image to be marked comprises at least one group of traffic cones;
taking the intersection point of each traffic cone and the ground in the road image to be marked as a marking point, drawing a traffic cone line on the road image to be marked based on the marking point to obtain a marked road image, and generating a traffic cone line ordered point set of the traffic cone line;
and processing the road image to be detected based on the marked road image and the traffic cone line ordered point set to obtain a traffic cone line detection result.
Optionally, the processing the road image to be detected based on the labeled road image and the traffic cone line ordered point set to obtain a traffic cone line detection result includes:
calculating the target offset of each traffic cone line through the marked road image and the traffic cone line ordered point set;
training a preset convolutional neural network by taking the marked road image as input data and taking the traffic cone line type, the target offset and the traffic cone line length as learning targets to obtain a preset detection model, wherein the traffic cone line type and the traffic cone line length are obtained by marking;
and carrying out traffic cone line detection on the road image to be detected through the preset detection model to obtain a traffic cone line detection result.
Optionally, the calculating the target offset of each traffic cone line through the marked road image and the traffic cone line ordered point set includes:
calculating pixel coordinates of the traffic cone line characteristic diagram projected by the target point according to the size of the marked road image and the size of the traffic cone line characteristic diagram extracted by the preset convolution neural network by taking a point which is half of the accumulated length of each traffic cone line in the traffic cone line ordered point set as the target point;
calculating the target point offset of each traffic cone line according to the pixel coordinates of the target point of each traffic cone line in the traffic cone line characteristic diagram;
calculating the offset of the sampling point except the target point in each traffic cone line in the traffic cone line ordered point set and the corresponding target point to obtain the offset of the sampling point;
and taking the target point offset and the sampling point offset of each traffic cone line as target offsets, or normalizing the sampling point offsets of each traffic cone line, and taking the target point offset and the normalized sampling point offset of each traffic cone line as target offsets.
Optionally, the detecting the traffic cone lines of the road image to be detected through the preset detection model to obtain a traffic cone line detection result, including:
carrying out traffic cone line detection on the road image to be detected through the preset detection model, and outputting traffic cone line existence probability, traffic cone line category probability, target offset predicted value and traffic cone line length predicted value of each pixel point in a traffic cone line characteristic diagram extracted by the preset detection model;
extracting target pixel points of which the existence probability of the traffic cone lines is higher than an existence probability threshold value from a traffic cone line characteristic diagram output by the preset detection model;
determining the traffic cone line category of the target pixel point according to the traffic cone line category probability of the target pixel point, and extracting a traffic cone line example with the traffic cone line category probability higher than a category probability threshold value from the target pixel point;
acquiring the length of the traffic cone line example according to the predicted traffic cone line length value;
and acquiring the pixel coordinates of each point in the traffic cone line example in the image to be detected according to the target deviation predicted value to obtain a traffic cone line sequence.
Optionally, the method further includes:
calculating the distance between traffic cone lines in the traffic cone line sequences;
and when the distance between the two traffic cone lines is smaller than a preset distance threshold value, eliminating the traffic cone line sequence with the lower traffic cone line category probability to obtain a final traffic cone line sequence.
Optionally, the processing the road image to be detected based on the labeled road image and the traffic cone line ordered point set to obtain a traffic cone line detection result includes:
expanding the connecting line of the marking points in the traffic cone line ordered point set to a segmentation marking with a preset width through pixels to obtain a target label of the marked road image;
taking the marked road image as input data, and taking the target label as a network learning target to train a preset convolutional neural network to obtain a preset segmentation model;
carrying out traffic cone line segmentation on the road image to be detected through the preset segmentation model, and outputting a traffic cone line segmentation result;
clustering traffic cone line pixel points in the traffic cone line segmentation result to obtain a plurality of traffic cone line examples;
and fitting the traffic cone line pixel points in each traffic cone line example to obtain the detected traffic cone lines.
Optionally, before the processing the road image to be detected based on the labeled road image and the traffic cone line ordered point set to obtain a traffic cone line detection result, the method further includes:
judging whether the number of the marking points in the traffic cone line ordered point set is equal to the number of preset sampling points or not;
if not, sequentially connecting two adjacent marking points in the traffic cone line ordered point set to obtain a line segment vector group;
calculating the curvature radius of the circular arc corresponding to two adjacent line segment vectors in the line segment vector group;
and processing the traffic cone line ordered point set according to the curvature radius, so that the final number of the marking points of the traffic cone line ordered point set is equal to the number of the preset sampling points.
Optionally, the processing the traffic cone-line ordered point set according to the curvature radius, so that the final number of marked points of the traffic cone-line ordered point set is equal to the number of preset sampling points, includes:
when the number of the marking points of the traffic cone line ordered point set is smaller than the number of the preset sampling points, calculating a first number and a second number according to the difference value between the number of the preset sampling points and the number of the marking points of the traffic cone line ordered point set and the number of the line segment vectors;
interpolating the first number plus one sampling points on the former line segment vector of each of the second number of adjacent line segment vector pairs with the smallest curvature radii, and interpolating the first number of sampling points on each of the remaining line segment vectors, so that the final number of marking points of the traffic cone line ordered point set is equal to the preset number of sampling points, wherein the final marking points comprise the original marking points and the newly added sampling points;
when the number of the marking points of the traffic cone line ordered point set is greater than the number of the preset sampling points, calculating the difference value between the number of the marking points of the traffic cone line ordered point set and the number of the preset sampling points to obtain a third number;
and deleting the line segment terminal point of the previous line segment vector in the adjacent two line segment vectors corresponding to the maximum curvature radius of the third number, so that the final number of the marking points of the traffic cone line ordered point set is equal to the number of the preset sampling points.
A second aspect of the present application provides a traffic cone line detection device, including:
an acquisition unit, used for acquiring a road image to be marked, wherein the road image to be marked comprises at least one group of traffic cones;
the marking unit is used for taking the intersection point of each traffic cone in the road image to be marked and the ground as a marking point, drawing a traffic cone line on the road image to be marked based on the marking point to obtain a marked road image and generating a traffic cone line ordered point set of the traffic cone line;
and the detection unit is used for processing the road image to be detected based on the marked road image and the traffic cone line ordered point set to obtain a traffic cone line detection result.
A third aspect of the present application provides traffic cone line detection equipment, the equipment comprising a processor and a memory;
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to execute the traffic cone line detection method according to any one of the first aspect according to instructions in the program code.
A fourth aspect of the present application provides a computer-readable storage medium for storing program code which, when executed by a processor, implements the traffic cone line detection method according to any one of the first aspect.
According to the technical scheme, the method has the following advantages:
the application provides a traffic cone line marking method, which comprises the following steps: acquiring a road image to be marked, wherein the road image to be marked comprises at least one group of traffic cones; the intersection point of each traffic cone and the ground in the road image to be marked is used as a marking point, a traffic cone line is drawn on the road image to be marked based on the marking point to obtain a marked road image, and a traffic cone line ordered point set of the traffic cone line is generated; and processing the road image to be detected based on the marked road image and the traffic cone line ordered point set to obtain a traffic cone line detection result.
In the method, traffic cone lines are drawn by taking the intersection points of the traffic cones with the ground in the road image to be marked as marking points; that is, a group of traffic cones is labeled as one independent instance, which ensures high-level semantic consistency of the traffic cone connection relationship and yields an explicit expression of that connection relationship. Traffic cone line detection is then performed on the road image to be detected based on the marked road image and the traffic cone line ordered point set obtained by marking, so that a grouped traffic cone line can be detected as a whole as one instance. The result is less prone to fluctuation, and the consistency, continuity and robustness of the traffic cone connection result can be effectively improved, which solves the technical problems in the prior art that, when associated traffic cones are connected into groups based on preset rules, a consistent result is difficult to obtain, the connection result fluctuates easily, and the robustness is low.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
Fig. 1 is a schematic flow chart of a traffic cone line detection method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an annotated road image according to an embodiment of the present application;
FIG. 3 is another schematic diagram of a labeled road image according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a traffic cone line detection device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of traffic cone line detection equipment according to an embodiment of the present application.
Detailed Description
The application provides a traffic cone line detection method, device, equipment and storage medium, which are used to solve the technical problems in the prior art that, when associated traffic cones are connected into groups based on preset rules, a consistent result is difficult to obtain, the connection result fluctuates easily, and the robustness is low.
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
For easy understanding, please refer to fig. 1. An embodiment of the present application provides a traffic cone line detection method, including:
step 101, acquiring a road image to be marked.
The road image to be marked can be acquired by a vehicle sensor or obtained from a public database. The road image to be marked comprises at least one group of traffic cones, and a group of traffic cones can be connected into at least one traffic cone line. Traffic cones include common triangular cones, metal rods, metal columns and the like.
And 102, taking the intersection point of each traffic cone and the ground in the road image to be marked as a marking point, drawing a traffic cone line on the road image to be marked based on the marking point to obtain a marked road image, and generating a traffic cone line ordered point set of the traffic cone line.
The visible intersection points of the traffic cones with the ground in the road image to be marked are taken as marking points, and the marking points are connected into a line, so that a virtual traffic cone line is generated in the road image to be marked and a corresponding marked road image is obtained. After a traffic cone line is marked, one of its two end points is taken as a starting point and the other as an end point, and the pixel coordinates of the marking points of the traffic cone line are stored in order from the starting point to the end point, generating the corresponding traffic cone line ordered point set. It can be understood that the traffic cone line category and the traffic cone line length may also be labeled; the traffic cone line category may be assigned according to the type of traffic cone, for example, a traffic cone line formed by triangular cones may be labeled as category one, and a traffic cone line formed by metal rods as category two. In the embodiment of the application, the intersection point of each traffic cone with the ground is used as the marking point for connection, so that the ground plane constraint can be used to project the traffic cone line in the image into a physical coordinate system for use by the automated driving system.
Referring to the labeled road image provided in fig. 2, the group of traffic cones in fig. 2 contains 3 traffic cones, so there are 3 corresponding marking points; connecting adjacent marking points in sequence yields a traffic cone line. When the traffic cone line ordered point set is generated, one end point of the traffic cone line is taken as the starting point and the other as the end point, and the pixel coordinates of all marking points in the labeled road image are saved in order from the starting point to the end point, generating the corresponding traffic cone line ordered point set. Referring to another labeled road image provided in fig. 3, after the marking points are connected in the manner of fig. 2, the obtained traffic cone line bends by approximately 90° at one point; since the directions of the segments before and after the bend differ greatly, the line can be marked as two traffic cone lines, namely traffic cone lines 1 and 2 shown in fig. 3, and a traffic cone line ordered point set is then generated for each traffic cone line.
It should be noted that, in addition to directly performing the above-mentioned labeling process on the acquired road image to be labeled, the acquired road image to be labeled may also be subjected to inverse perspective transformation or other processing manners to transform the acquired road image to be labeled into a bird's-eye view, and then the above-mentioned labeling process is performed on the bird's-eye view, where the generated labeled road image is the bird's-eye view and the ordered set of traffic cone lines is the set of points on the bird's-eye view. Or directly converting the labeling result on the road image to be labeled into the labeling result on the aerial view, and then performing subsequent processing on the aerial view.
Further, different traffic cone line ordered point sets may contain different numbers of marking points; for example, one group of traffic cones contains 5 traffic cones, so the corresponding ordered point set contains 5 marking points, while another group contains 8 traffic cones, so its ordered point set contains 8 marking points. For convenience of subsequent processing, the traffic cone line ordered point sets can be preprocessed to obtain ordered point sets with a uniform number of marking points. The number of sampling points can be preset according to actual conditions or experience to obtain a preset number of sampling points. When the number of marking points in a traffic cone line ordered point set differs from the preset number of sampling points, the point set can be preprocessed to obtain a traffic cone line ordered point set with the preset number of sampling points.
In an embodiment, when the number of the marked points of the traffic cone-line ordered point set is less than the number of the preset sampling points, the f sampling points can be randomly interpolated in the traffic cone-line ordered point set directly according to the absolute difference value f between the number of the marked points and the number of the preset sampling points, so that the final number of the marked points of the traffic cone-line ordered point set is equal to the number of the preset sampling points. When the number of the marked points of the traffic cone line ordered point set is greater than the number of the preset sampling points, f marked points can be randomly deleted in the traffic cone line ordered point set directly according to the absolute difference value f between the number of the marked points and the number of the preset sampling points, and the starting point and the ending point in the traffic cone line ordered point set are generally kept, so that the final number of the marked points in the traffic cone line ordered point set is equal to the number of the preset sampling points.
In another embodiment, the traffic cone line ordered point set can be interpolated or part of points can be deleted through the curvature radius, and the ordered point set with the preset number of sampling points is obtained with the least precision loss. Specifically, the process of obtaining the traffic cone line ordered point set with the preset number of sampling points may be:
s1, judging whether the number of the marking points in the traffic cone line ordered point set is equal to the number of the preset sampling points or not;
and judging whether the number of the marking points in each traffic cone line ordered point set is equal to the number of the preset sampling points or not, and further determining whether the traffic cone line ordered point set needs to be processed or not. And when the number of the marked points in the traffic cone line ordered point set is equal to the data of the preset sampling point, keeping all the marked points.
S2, if not, sequentially connecting two adjacent marking points in the traffic cone line ordered point set to obtain a line segment vector group;
and when the number of the marked points in the traffic cone line ordered point set is not equal to the preset sampling point data, vectorizing the line segment formed by two adjacent marked points in the traffic cone line ordered point set. In particular, in the road image acquired by the vehicle, the traffic cone line (such as the lane line near the bottom of the image) near the vehicle is clearer, and the reliability and the uniformity of the labeling are better. Therefore, in the embodiment of the present application, the annotation point closest to the lower edge in the image is preferably used as the starting point of the traffic cone ordered point set, two adjacent annotation points are used as end points, the two adjacent annotation points are sequentially connected to generate a line segment vector group, and each line segment vector in the line segment vector group can be sequentially represented as L 1 、L 2 、...、L n Where n is the number of line segment vectors. Suppose there is A (x) 1 ,y 1 )、B(x 2 ,y 2 )、C(x 3 ,y 3 ) Three marking points, wherein point A is a starting point, point C is an end point, two adjacent marking points are connected in sequence, and the obtained line segment vector group comprises line segment vectors
Figure BDA0003670373330000081
And the line segment vector
Figure BDA0003670373330000082
Line segment vector
Figure BDA0003670373330000083
Can be represented as L 1 (x 2 -x 1 ,y 2 -y 1 ) And the line segment vector
Figure BDA0003670373330000084
Can be represented as L 2 (x 3 -x 2 ,y 3 -y 2 )。
S3, calculating the curvature radius of the circular arc corresponding to two adjacent line segment vectors in the line segment vector group;
after the line segment vector group is obtained, the included angle between two adjacent line segment vectors in the line segment vector group is calculated. In one embodiment, the cosine value of the angle between two adjacent segment vectors, such as segment vector L, may be calculated according to the inner product and the modular length of the two adjacent segment vectors in the segment vector group n-1 And the line segment vector L n The cosine of the angle of n-1 =(L n-1 ·L n )/(|L n-1 ||L n I)); calculating the included angle of two adjacent line segment vectors by the cosine value of the included angle of two adjacent line segment vectors, such as line segment vector L n-1 And the line segment vector L n Is theta n-1 =arccos((L n-1 ·L n )/(|L n-1 ||L n |)) in which n>2。
In another embodiment, the sine of the angle between two adjacent line segment vectors, such as line segment vector L, can be calculated according to the cross product and the module length of two adjacent line segment vectors in the line segment vector group n-1 And the line segment vector L n Sine value of the angle of sin theta n-1 =|L n-1 ×L n |/(|L n-1 ||L n I)); calculating the angle between two adjacent line segment vectors, such as line segment vector L, by the sine value of the angle between two adjacent line segment vectors n-1 And the line segment vector L n Angle of (theta) n-1 Comprises the following steps:
Figure BDA0003670373330000091
in another embodiment, the included angle between two adjacent line segment vectors may also be calculated according to the inner product and the modular length of the two adjacent line segment vectors in the line segment vector groupCosine values, e.g. line vectors L n-1 And the line segment vector L n The cosine of the angle of n-1 =(L n-1 ·L n )/(|L n-1 ||L n |); calculating sine value of included angle between two adjacent line segment vectors according to cross product and modular length of two adjacent line segment vectors, such as line segment vector L n-1 And the line segment vector L n Sine value of the angle of sin theta n-1 =|L n-1 ×L n |/(|L n-1 ||L n I)); calculating tangent value of included angle of two adjacent line segment vectors, such as line segment vector L, by cosine value and sine value of included angle n-1 And the line segment vector L n Tan theta is the angle tangent n-1 =sinθ n-1 /cosθ n-1 (ii) a Calculating the angle between two adjacent line segment vectors by the tangent of the angle between two adjacent line segment vectors, such as line segment vector L n-1 And the line segment vector L n Is theta n-1 =arctan(sinθ n-1 /cosθ n-1 ). Of course, the cross product and the inner product of two adjacent line segment vectors in the line segment vector group can also be directly calculated and then the formula θ is used to calculate the cross product and the inner product n-1 =arctan(|L n-1 ×L n |/L n-1 ·L n ) And calculating to obtain the included angle between two adjacent line segment vectors.
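The angle computations above can be illustrated with a short sketch. This is only an illustration of the formulas, not part of the patent; the function name and the example points are our own assumptions.

```python
import numpy as np

def included_angle(v_prev, v_next):
    """Included angle (radians, in [0, pi]) between two adjacent line
    segment vectors, e.g. L1 = B - A and L2 = C - B."""
    v_prev = np.asarray(v_prev, dtype=float)
    v_next = np.asarray(v_next, dtype=float)
    dot = float(np.dot(v_prev, v_next))                      # inner product
    cross = v_prev[0] * v_next[1] - v_prev[1] * v_next[0]    # 2-D cross product (scalar)
    # Variant 1: from the cosine value
    cos_theta = dot / (np.linalg.norm(v_prev) * np.linalg.norm(v_next))
    theta_from_cos = float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
    # Variant 3: from cross product and inner product; atan2 keeps the result in [0, pi]
    theta_from_tan = float(np.arctan2(abs(cross), dot))
    assert np.isclose(theta_from_cos, theta_from_tan)
    return theta_from_tan

# Example with three marking points A, B, C
A, B, C = np.array([0.0, 0.0]), np.array([10.0, 0.0]), np.array([15.0, 5.0])
theta = included_angle(B - A, C - B)   # pi/4 (45 degrees) for this layout
```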
The included angle calculated by the above process lies in the range [0°, 180°]. After the included angle between two adjacent line segment vectors is obtained, the line segments corresponding to the two adjacent vectors are taken as the tangents at the start point and the end point of an arc, which determines the arc corresponding to the two adjacent vectors, and the curvature radius of that arc can be calculated from the module lengths and the included angle of the two vectors. Specifically, a smooth arc is constructed from the start point of the former vector to the end point of the latter vector of the two adjacent line segment vectors, with the two corresponding line segments as the tangents at the arc's start point and end point, so the arc corresponding to the two adjacent vectors is uniquely determined. The curvature radius r of an arc is known to equal its arc length divided by its central angle, and the central angle equals the included angle θ of the two adjacent line segment vectors corresponding to the arc. In the embodiment of the present application, the arc length is approximated by the sum of the module lengths of the two adjacent line segment vectors, so the curvature radius of the arc corresponding to adjacent line segment vectors Ln-1 and Ln is rn-1 = (|Ln-1| + |Ln|) / θn-1. After the curvature radii of the arcs corresponding to all pairs of adjacent line segment vectors in the group are calculated, each curvature radius can be associated with its pair of adjacent vectors. Suppose the line segment vector group contains 3 vectors L1, L2 and L3. Adjacent vectors L1 and L2 determine a unique arc, and the curvature radius r1 of this arc can be calculated from the module lengths and included angle of L1 and L2; that is, r1 corresponds to the arc determined by L1 and L2, and that arc corresponds to the adjacent vectors L1 and L2, so r1 can be associated with the adjacent vector pair L1 and L2. Similarly, adjacent vectors L2 and L3 determine a unique arc whose curvature radius r2 can be calculated from the module lengths and included angle of L2 and L3, so r2 is associated with the adjacent vector pair L2 and L3.
S4, processing the traffic cone line ordered point set according to the curvature radii, so that the final number of marking points of the traffic cone line ordered point set is equal to the preset number of sampling points, thereby obtaining an ordered point set with the preset number of sampling points for the corresponding traffic cone line.
When the number of the marking points of the traffic cone line ordered point set is smaller than the number of the preset sampling points, interpolating the traffic cone line ordered point set according to the curvature radius, so that the number of the final marking points of the traffic cone line ordered point set is equal to the number of the preset sampling points, and the final marking points comprise original marking points and newly added sampling points.
Specifically, when the number N of marking points of the traffic cone line ordered point set is less than the preset number M of sampling points, a first number m = (M − N) / n (integer part) and a second number k = (M − N) mod n are calculated, where n is the number of line segment vectors. The curvature radii can be sorted in ascending order (or descending order); then m + 1 points are uniformly interpolated on the former line segment vector of each of the k adjacent vector pairs with the smallest curvature radii, and m points are uniformly interpolated on each of the remaining line segment vectors (i.e. the vectors not yet interpolated), so that the final number of marking points of the traffic cone line ordered point set is equal to the preset number of sampling points. The smaller the curvature radius, the greater the bending of the corresponding line segments, and the more marking points are needed there when the number of marking points of the traffic cone line ordered point set is smaller than the preset number of sampling points.
When the number of marking points of the traffic cone line ordered point set is greater than the preset number of sampling points, part of the marking points can be deleted according to the curvature radii, so that the final number of marking points equals the preset number of sampling points. Specifically, when the number N of marking points of the traffic cone line ordered point set is greater than the preset number M of sampling points, the difference between the number of marking points and the preset number of sampling points is calculated to obtain a third number N − M; the end point of the former line segment vector of each of the N − M adjacent vector pairs with the largest curvature radii is then deleted, so that the final number of marking points equals the preset number of sampling points. When N is greater than M, N − M marking points need to be deleted; all curvature radii can be sorted in descending order (or ascending order), and the end point of the former line segment vector of each of the first N − M pairs with the largest curvature radii is deleted, while the starting point and end point of the traffic cone line ordered point set are always kept, i.e. the deleted marking points are intermediate points. The larger the curvature radius, the closer the corresponding line segments are to a straight line; when the number of marking points needs to be reduced, deleting intermediate points of the segments closest to a straight line first keeps the original shape of the traffic cone line as much as possible with a small error. Although deleting part of the marking points loses some precision relative to the original marking points, in the embodiment of the application this loss occurs in the segments closest to a straight line, so the impact on the precision of the point set is limited.
After the number of the sampling points is determined, the traffic cone lines in different forms can be represented by a unified ordered point set through the processing mode, the number of points is usually not too large, compared with the method for interpolating or deleting the marked points on the traffic cone line ordered point set in an undifferentiated mode, the method for processing the traffic cone lines based on the curvature radius can more accurately represent the traffic cone lines in various positions and shapes by using relatively fewer points, and the precision loss of the traffic cone lines relative to the original ordered point set is smaller.
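A rough sketch of the curvature-radius-based resampling described above is given below. It is our own illustration under the stated assumptions (uniform subdivision of a segment when m or m + 1 points are interpolated; variable names are ours), not the patented implementation.

```python
import numpy as np

def resample_by_curvature(points, M):
    """Resample an ordered point set (list of (x, y)) to exactly M points,
    guided by the curvature radius r = (|L_prev| + |L_next|) / theta."""
    pts = [np.asarray(p, dtype=float) for p in points]
    N, n = len(pts), len(pts) - 1                    # marking points, segment vectors
    if N == M:
        return pts
    segs = [pts[i + 1] - pts[i] for i in range(n)]
    radii = []                                       # one radius per adjacent pair (L_i, L_{i+1})
    for i in range(n - 1):
        a, b = segs[i], segs[i + 1]
        theta = np.arctan2(abs(a[0] * b[1] - a[1] * b[0]), float(np.dot(a, b)))
        radii.append((np.linalg.norm(a) + np.linalg.norm(b)) / max(theta, 1e-9))
    if N < M:                                        # interpolate new sampling points
        m, k = divmod(M - N, n)
        sharp = set(int(i) for i in np.argsort(radii)[:k])   # k pairs with smallest radii
        out = []
        for i in range(n):
            extra = m + 1 if i in sharp else m       # former segment of a sharp pair gets m + 1
            out.append(pts[i])
            for j in range(1, extra + 1):            # uniform interpolation along the segment
                out.append(pts[i] + segs[i] * j / (extra + 1))
        out.append(pts[-1])
        return out
    # N > M: delete the shared end point of the N - M flattest (largest radius) pairs
    drop = {int(i) + 1 for i in np.argsort(radii)[::-1][: N - M]}   # intermediate points only
    return [p for idx, p in enumerate(pts) if idx not in drop]
```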
In the embodiment of the application, adjacent marking points in the traffic cone line ordered point set are connected to generate a line segment vector group, the curvature radius of the arc corresponding to each pair of adjacent line segment vectors in the group is calculated, and the traffic cone line ordered point set is then processed according to the curvature radii, so that the final number of marking points of the traffic cone line ordered point set is unified with the preset number of sampling points. This realizes a unified expression of the ordered point sets of differently labeled traffic cone lines and better satisfies practical needs.
And 103, processing the road image to be detected based on the marked road image and the traffic cone line ordered point set to obtain a traffic cone line detection result.
In one embodiment, a preset segmentation model can be trained through the labeled road image and the traffic cone line ordered point set, then the traffic cone line segmentation processing is performed on the road image to be detected through the preset segmentation model, then the segmentation results are clustered and fitted, and the traffic cone line detection result is obtained, and the specific process can be as follows:
a1, expanding the connection line of the marking points in the traffic cone line ordered point set to a segmentation marking with preset width through pixels to obtain a target label of the marked road image;
the traffic cone line can be detected by means of image segmentation, so that before the preset convolutional neural network is trained, the traffic cone line is firstly subjected to segmentation labeling by pixel expansion to a certain width, namely, a connecting line of labeling points in the traffic cone line ordered point set obtained by labeling can be subjected to segmentation labeling by pixel expansion to a preset width, namely, the traffic cone line is a connected region in the image, and the connected region corresponding to each traffic cone line in the labeled road image is labeled as a target label of the labeled road image.
A2, training a preset convolutional neural network by taking the marked road image as input data and a target label as a network learning target to obtain a preset segmentation model;
the method comprises the steps that a marked road image is used as input data, a target label is used as a network learning target, a preset convolutional neural network is trained, in the training process, the preset convolutional neural network can carry out feature extraction and semantic segmentation on the input marked road image to obtain a segmentation result, and the segmentation result is represented by a traffic cone line connected region at a pixel level; calculating a loss value according to the segmentation result of the labeled road image in the training process and the corresponding target label, updating the network parameters of the preset convolutional neural network through the loss value until the training iteration number reaches the maximum iteration number or the training error is basically kept unchanged and the training error is lower than a preset error threshold value, judging that the preset convolutional neural network is converged to obtain a trained preset convolutional neural network model, and taking the trained preset convolutional neural network model as a preset segmentation model. The existing convolutional neural network can be used as a preset convolutional neural network for training, the network structure of the preset convolutional neural network is not particularly limited, and a person skilled in the art can select the network structure according to actual conditions.
A3, carrying out traffic cone line segmentation on the road image to be detected through a preset segmentation model, and outputting a traffic cone line segmentation result;
the method comprises the steps of inputting a road image to be detected into a preset segmentation model to carry out traffic cone segmentation so as to segment the traffic cone from a background to obtain a traffic cone segmentation result, wherein the traffic cone segmentation result is represented by a traffic cone connected domain at a pixel level, and pixel points in the traffic cone connected domain representation are traffic cone pixel points.
A4, clustering traffic cone line pixel points in the traffic cone line segmentation result to obtain a plurality of traffic cone line examples;
after the traffic cone segmentation result is obtained, clustering is carried out on traffic cone pixel points in the traffic cone connected domain representation, the clustering result is that the traffic cone pixel points of the same category form a traffic cone instance, and when a plurality of categories of traffic cones exist, the traffic cone instance corresponds to the plurality of traffic cone instances. The specific clustering method is not specifically limited, and those skilled in the art can select the clustering method according to actual situations.
And A5, fitting the traffic cone line pixel points in each traffic cone line example to obtain the detected traffic cone lines.
After the traffic cone line instances are obtained by clustering, the traffic cone line pixel points in each instance are fitted, for example by polynomial fitting, to obtain the traffic cone lines, i.e. the traffic cone line detection result; each traffic cone line instance corresponds to one traffic cone line. To achieve high detection precision, a large output resolution is usually required, and when the network structure of the preset segmentation model is complex, a high output resolution greatly increases the computation of the preset segmentation model, so the real-time performance of the detection result is not ideal and can hardly satisfy systems with high real-time requirements.
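Steps A4 and A5 could, for example, be sketched as below. DBSCAN and np.polyfit are our own choices for illustration; the patent leaves the clustering and fitting methods open.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cone_lines_from_segmentation(seg_mask, poly_degree=2):
    """Cluster traffic cone line pixels into instances and fit each
    instance with a polynomial x = f(y)."""
    ys, xs = np.nonzero(seg_mask)                    # traffic cone line pixel points
    if len(xs) == 0:
        return []
    pixels = np.stack([xs, ys], axis=1)
    labels = DBSCAN(eps=5, min_samples=20).fit_predict(pixels)
    lines = []
    for inst in set(labels) - {-1}:                  # -1 marks noise pixels
        pts = pixels[labels == inst]
        coeffs = np.polyfit(pts[:, 1], pts[:, 0], poly_degree)   # fit x as a function of y
        lines.append(coeffs)                         # one fitted traffic cone line per instance
    return lines
```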
In another embodiment, a preset detection model can be directly trained through the marked road image and the traffic cone line ordered point set, end-to-end traffic cone line detection is performed on the road image to be detected through the preset detection model, and a traffic cone line detection result is obtained, and the specific process can be as follows:
b1, calculating the target offset of each traffic cone line through the marked road image and the traffic cone line ordered point set;
in the embodiment of the application, when the preset convolutional neural network is trained, a network learning target of the preset convolutional neural network is constructed by taking a traffic cone line example as a unit, wherein the traffic cone line example can be specifically expressed as follows: and taking a point which is half of the accumulated length of each traffic cone in the traffic cone ordered point set as a target point of the corresponding traffic cone instance, and generating the traffic cone category of each traffic cone instance according to actual requirements. Wherein, a traffic cone corresponds to a traffic cone instance.
In the embodiment of the present application, the point (xc, yc) at half the accumulated length of each traffic cone line in the ordered point set is preferably taken as the target point; the pixel coordinates of the target point projected onto the traffic cone line feature map are calculated from the size img_width × img_length of the marked road image and the size F_width × F_length of the traffic cone line feature map extracted by the preset convolutional neural network, and the target point offset of each traffic cone line is calculated from the pixel coordinates of its target point in the feature map. Specifically, the sampling multiples are calculated from the size of the marked road image and the size of the feature map: the abscissa sampling multiple Sx = img_width / F_width and the ordinate sampling multiple Sy = img_length / F_length. The target point offset of each traffic cone line is then calculated from the target point coordinates (xc, yc) and the sampling multiples. Specifically, the abscissa of the target point is divided by the abscissa sampling multiple and the ordinate by the ordinate sampling multiple, giving the intermediate coordinates (xg, yg) of the target point, i.e. xg = xc / Sx and yg = yc / Sy, which project the target point of the original marked road image into the grid of the traffic cone line feature map. The intermediate coordinates are rounded down to obtain the two coordinate index values (gx, gy) of the target point, i.e. gx = floor(xg), gy = floor(yg), where floor() is the floor function. The differences between the two intermediate coordinates of the target point and the corresponding index values give the target point offset (cx, cy) of each traffic cone line, i.e. cx = xg − gx, cy = yg − gy, where both components of the target point offset lie in the range 0 to 1.

After the target point offset of each traffic cone line is calculated, the abscissa of each point other than the target point is divided by the abscissa sampling multiple and the ordinate by the ordinate sampling multiple, to obtain the coordinates (xi, yi) of each sampling point other than the target point projected onto the traffic cone line feature map output by the preset convolutional neural network. The offset between each sampling point and the corresponding target point on the feature map is then calculated from the pixel coordinates (xi, yi) of the sampling point and the pixel coordinates of the corresponding target point on the feature map, giving the sampling point offset (Oxi, Oyi), i.e. Oxi = xi − xg, Oyi = yi − yg.
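A sketch of the offset computation above, under the stated notation (image size img_width × img_length, feature-map size F_width × F_length); the function name is ours, and the offsets are returned for every point of the ordered set, whereas the patent computes sampling point offsets only for the points other than the target point:

```python
import math

def compute_offsets(point_set, img_size, feat_size):
    """point_set: ordered (x, y) pixel coordinates in the marked road image.
    Returns the grid cell (gx, gy), target point offset (cx, cy) and the
    per-point offsets (Oxi, Oyi) on the traffic cone line feature map."""
    img_w, img_h = img_size
    f_w, f_h = feat_size
    s_x, s_y = img_w / f_w, img_h / f_h              # sampling multiples

    # target point: the point at half the accumulated length of the line
    seg = [math.dist(point_set[i], point_set[i + 1]) for i in range(len(point_set) - 1)]
    half, acc, idx = sum(seg) / 2.0, 0.0, 0
    while idx < len(seg) - 1 and acc + seg[idx] < half:
        acc += seg[idx]
        idx += 1
    t = (half - acc) / seg[idx] if seg[idx] > 0 else 0.0
    p0, p1 = point_set[idx], point_set[idx + 1]
    x_c = p0[0] + t * (p1[0] - p0[0])
    y_c = p0[1] + t * (p1[1] - p0[1])

    x_g, y_g = x_c / s_x, y_c / s_y                  # projection onto the feature map
    g_x, g_y = math.floor(x_g), math.floor(y_g)      # grid index values
    c_x, c_y = x_g - g_x, y_g - g_y                  # target point offset, in [0, 1)

    offsets = [((x / s_x) - x_g, (y / s_y) - y_g) for (x, y) in point_set]
    return (g_x, g_y), (c_x, c_y), offsets
```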
In one embodiment, the target point offset and the sampling point offset of each traffic cone may be directly taken as the target offset. The sampling point offset of each traffic cone line is directly used as a target offset and input into a preset convolutional neural network for learning, the numerical range span is large, and the learning difficulty of the preset convolutional neural network is large.
In another embodiment, the offset of the sampling point of each traffic cone may be normalized, and the offset of the target point and the normalized offset of the sampling point may be used as the target offset. The sampling point offset of each traffic cone line is directly used as a network learning target, so that the numerical range span is large, and the learning difficulty is large. Therefore, the sampling point offset of each traffic cone can be normalized.
The abscissa value of the offset of the sampling point of each traffic cone line can be normalized by the width img _ width of the labeled road image, and the ordinate value of the offset of the sampling point of each traffic cone line can be normalized by the height img _ length of the labeled road image, and the numerical range is compressed to [0,1], but the method can increase relative errors for points with smaller offset values (i.e. shorter traffic cone lines).
Or, the two coordinate values of the offset of the sampling point of each traffic cone line can be normalized through the length of the traffic cone line, the traffic cone lines with different lengths can obtain the targets with the same distribution, and the learning difficulty can be reduced.
The length of the traffic cone line can be absolute length, or relative length normalized by the width, height or diagonal length of the marked road image.
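For instance, normalizing by the traffic cone line length (the last option above) could look like the following sketch; `line_length` may be an absolute or a relative length as described, and the function name is an assumption.

```python
def normalize_offsets(offsets, line_length):
    """Divide both components of each sampling point offset by the traffic cone
    line length, so lines of different lengths share one target distribution."""
    return [(ox / line_length, oy / line_length) for ox, oy in offsets]
```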
B2, training a preset convolutional neural network by taking the marked road image as input data and taking the traffic cone line type, the target offset and the traffic cone line length as learning targets to obtain a preset detection model;
the preset convolutional neural network in the embodiment of the application comprises a feature extraction module and a prediction module, and an existing feature extraction network (such as a residual error network and the like) can be used as the feature extraction module, the feature extraction module is used for extracting a feature map of an input image, and the prediction module is used for detecting a traffic cone line on the feature map extracted by the feature extraction module, wherein the prediction module can be composed of a plurality of detection heads, the detection heads can be connected to any feature map of the feature extraction module or simultaneously connected to a plurality of feature maps with different scales and different features, and the feature map finally connected by the detection heads is the traffic cone line feature map pointed out in the foregoing. The detection head may specifically include a traffic cone existence detection head, a traffic cone category detection head, a target point offset detection head, a sampling point offset detection head, and a traffic cone length detection head. The traffic cone existence detection head is used for detecting whether a traffic cone exists in the input traffic cone characteristic diagram or not and outputting the traffic cone existence probability of each pixel point on the traffic cone characteristic diagram. If the traffic cone characteristic diagram is the same as the size of the input marked road image, one pixel point in the traffic cone characteristic diagram corresponds to one pixel point in the input marked road image, if the traffic cone characteristic diagram is S times of the size of the input marked road image after sampling, one pixel point in the traffic cone characteristic diagram corresponds to a pixel point in the S-S range in the input marked road image, and the value of each pixel point finally output by the traffic cone existence detection head comprises the existence probability of the traffic cone. The traffic cone line type detection head is used for detecting the type of the traffic cone line in an input traffic cone line characteristic diagram, the output is the traffic cone line type probability, and when the traffic cone line is independently used as a detection task, the traffic cone line type probability can be one-two classification or one-multiple classification; the traffic cone may also be detected along with other road markings (e.g., lane lines, roadside lines, etc.), where the detected categories are multiple categories. When each traffic cone line example is represented by multi-classification probability, each pixel point comprises a plurality of classifications of the output maximum traffic cone line number, when the maximum traffic cone line number is set to be 1, one pixel point corresponds to one traffic cone line example, and one traffic cone line example corresponds to one traffic cone line. The target point offset detection head is used for predicting the offset of the target point of each traffic cone line on the traffic cone line characteristic diagram and outputting a target point offset prediction value. The sampling point offset detection head is used for predicting the offset of sampling points in the traffic cone line except the target point and the target point on the traffic cone line characteristic diagram and outputting a sampling point offset prediction value. 
The traffic cone line length detection head predicts the length of each traffic cone line and outputs a traffic cone line length prediction value, which can be an absolute length or a relative length normalized by the width, the height, or the diagonal length of the image. It can be understood that the prediction module in the embodiment of the present application may also use a vision transformer to directly predict a specified number of traffic cone line instances instead of using detection heads.
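For illustration only, a minimal PyTorch-style sketch of such a prediction structure is given below; the class names (ConeLineNet, ConeLineHead), the toy two-layer backbone, the overall downsampling factor of 4, and the parameter values num_classes and num_points are assumptions made for the example and are not the specific network of this embodiment.

    import torch
    import torch.nn as nn

    class ConeLineHead(nn.Module):
        # A detection head realized as a 1x1 convolution producing a per-pixel output map.
        def __init__(self, in_channels, out_channels):
            super().__init__()
            self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=1)

        def forward(self, feat):
            return self.conv(feat)

    class ConeLineNet(nn.Module):
        # Hypothetical feature extraction module plus the five detection heads described above.
        def __init__(self, num_classes=2, num_points=16):
            super().__init__()
            self.backbone = nn.Sequential(                       # stands in for a residual feature extractor
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )                                                     # overall downsampling factor S = 4 here
            c = 64
            self.exist_head = ConeLineHead(c, 1)                  # traffic cone line existence
            self.cls_head = ConeLineHead(c, num_classes)          # traffic cone line category
            self.center_head = ConeLineHead(c, 2)                 # target point offset (c'_x, c'_y)
            self.points_head = ConeLineHead(c, 2 * num_points)    # sampling point offsets (O'_xi, O'_yi)
            self.length_head = ConeLineHead(c, 1)                 # traffic cone line length

        def forward(self, image):
            feat = self.backbone(image)                           # traffic cone line feature map
            return {
                "exist": torch.sigmoid(self.exist_head(feat)),    # existence probability per pixel point
                "cls": torch.softmax(self.cls_head(feat), dim=1), # category probabilities per pixel point
                "center": self.center_head(feat),
                "points": self.points_head(feat),
                "length": self.length_head(feat),
            }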
In the embodiment of the application, the marked road image is used as input data, and the traffic cone line category, the target offset, and the traffic cone line length are used as learning targets to train the preset convolutional neural network until it converges; the trained convolutional neural network model is taken as the preset detection model.
The preset detection model in the embodiment of the application constructs its training targets in units of traffic cone line instances. Traffic cone line existence prediction and category prediction can be realized by two classification detection heads respectively, and traffic cone line position and shape prediction can be realized by the offset detection heads, so traffic cone lines of various shapes can be predicted with high detection precision. In addition, because the point at half of the accumulated length of each traffic cone line is taken as the target point of the traffic cone line instance, any number of traffic cone lines can be handled, the existence prediction is consistent with what actually appears in the image, and subsequent association and merging of line segments are more flexible.
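For illustration only, one possible multi-task training objective matching these learning targets is sketched below; the equally weighted sum, the particular loss functions, and the tensor layout (probability maps pred["exist"] and pred["cls"], an instance mask target["mask"]) are assumptions made for the example, not the formulation of this embodiment.

    import torch
    import torch.nn.functional as F

    def cone_line_loss(pred, target):
        # pred["exist"] and pred["cls"] are assumed to be probability maps; pred["center"],
        # pred["points"], and pred["length"] are raw regression maps. target["mask"] marks the
        # pixel points that carry an instance, so regression terms only count at those locations.
        exist_loss = F.binary_cross_entropy(pred["exist"], target["exist"])
        cls_loss = F.nll_loss(torch.log(pred["cls"] + 1e-8), target["cls"])      # category index per pixel
        mask = target["mask"]
        center_loss = F.l1_loss(pred["center"] * mask, target["center"] * mask)  # target point offset
        points_loss = F.l1_loss(pred["points"] * mask, target["points"] * mask)  # sampling point offsets
        length_loss = F.l1_loss(pred["length"] * mask, target["length"] * mask)  # traffic cone line length
        return exist_loss + cls_loss + center_loss + points_loss + length_loss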
And B3, carrying out traffic cone line detection on the road image to be detected through a preset detection model to obtain a traffic cone line detection result.
Traffic cone line detection is performed on the road image to be detected through the preset detection model, which outputs, for each pixel point in the traffic cone line feature map it extracts, the traffic cone line existence probability, the traffic cone line category probability, the target offset prediction value, and the traffic cone line length prediction value. Target pixel points whose traffic cone line existence probability is higher than an existence probability threshold are extracted from the traffic cone line feature map output by the preset detection model; the traffic cone line category of each target pixel point is determined according to its traffic cone line category probability, and traffic cone line instances whose traffic cone line category probability is higher than a category probability threshold are extracted from the target pixel points; the length of each traffic cone line instance is obtained from the traffic cone line length prediction value; and the pixel coordinates of each point of the traffic cone line instance in the image to be detected are obtained according to the target offset prediction value, yielding a traffic cone line sequence.
When analyzing the detection result output by the preset detection model, target pixel points whose traffic cone line existence probability is higher than the existence probability threshold are first extracted from the traffic cone line feature map output by the preset detection model; these target pixel points are the points belonging to traffic cone lines. Then, the traffic cone line category of each target pixel point is determined according to its traffic cone line category probability, and the traffic cone line instances whose category probability is higher than the category probability threshold are extracted from the target pixel points. Specifically, the category to which the traffic cone line instance at each target pixel point belongs is determined from its traffic cone line category probabilities: when there are multiple traffic cone line categories, the traffic cone line category detection head predicts, for each instance, a traffic cone line category probability for every labeled category, the probabilities over all categories sum to 1, and the category with the maximum probability is the category to which the traffic cone line instance belongs; the traffic cone line instances whose category probability is higher than the category probability threshold are then extracted from the target pixel points. The traffic cone line length of each traffic cone line instance can be obtained from the traffic cone line length prediction value.
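A minimal sketch of this extraction step is given below for illustration; the array layout and the function name extract_instances are assumptions made for the example.

    import numpy as np

    def extract_instances(exist_prob, cls_prob, exist_thr=0.5, cls_thr=0.5):
        # exist_prob: (H, W) traffic cone line existence probability for each feature-map pixel point.
        # cls_prob:   (C, H, W) traffic cone line category probabilities for each feature-map pixel point.
        # Returns (row, col, category, score) tuples for the traffic cone line instances that are kept.
        instances = []
        rows, cols = np.where(exist_prob > exist_thr)           # target pixel points
        for r, c in zip(rows, cols):
            category = int(np.argmax(cls_prob[:, r, c]))        # category with the largest probability
            score = float(cls_prob[category, r, c])
            if score > cls_thr:                                  # keep only confident instances
                instances.append((r, c, category, score))
        return instances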
The target offsets in the embodiment of the present application include the target point offset and the sampling point offsets. For each extracted traffic cone line instance, the pixel coordinates of the target point of the instance at the scale of the traffic cone line feature map are first obtained from the target point offset. Specifically, if the directly calculated target point offset was used as the network learning target when training the preset convolutional neural network, a reverse calculation is performed directly on the target point offset prediction value (c'_x, c'_y) output by the target point offset detection head to obtain the pixel coordinates (x'_g, y'_g) of the target point of each traffic cone line instance at the scale of the predicted traffic cone line feature map, where x'_g = c'_x + g_x and y'_g = c'_y + g_y, g_x is the horizontal index of the target pixel point corresponding to the traffic cone line instance on the predicted traffic cone line feature map, and g_y is the vertical index of the target pixel point corresponding to the traffic cone line instance on the predicted traffic cone line feature map.
Then, the pixel coordinates of the target point of the traffic cone line instance on the traffic cone line feature map are converted into pixel coordinates on the road image to be detected according to the size of the road image to be detected and the size of the traffic cone line feature map. Specifically, the horizontal sampling multiple S_x and the vertical sampling multiple S_y can be determined from the size of the road image to be detected and the size of the traffic cone line feature map; the abscissa x'_g of the target point coordinates (x'_g, y'_g) of each traffic cone line instance on the feature map is multiplied by S_x, and the ordinate y'_g is multiplied by S_y, giving the pixel coordinates (x'_c, y'_c) of the target point of each traffic cone line instance on the road image to be detected. It can be understood that the size of the marked road image input into the preset convolutional neural network during training is consistent with the size of the road image to be detected input into the preset detection model during prediction.
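For illustration, the two steps above (reversing the target point offset and scaling by the sampling multiples) can be sketched as follows, assuming un-normalized offsets; the function name and argument names are chosen for the example.

    def recover_target_point(cx_off, cy_off, gx, gy, sx, sy):
        # (cx_off, cy_off): predicted target point offset (c'_x, c'_y) at the feature-map pixel (gx, gy).
        # (gx, gy): horizontal and vertical index of the target pixel point on the feature map.
        # (sx, sy): horizontal and vertical sampling multiples S_x and S_y.
        xg = cx_off + gx                    # x'_g = c'_x + g_x  (feature-map scale)
        yg = cy_off + gy                    # y'_g = c'_y + g_y  (feature-map scale)
        return xg * sx, yg * sy             # (x'_c, y'_c) on the road image to be detected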
Finally, the pixel coordinates, on the road image to be detected, of the sampling points of the traffic cone line instance other than the target point are calculated from the sampling point offset prediction values and the pixel coordinates of the target point at the scale of the traffic cone line feature map, yielding the traffic cone line sequence. Specifically, for each traffic cone line instance, if the directly calculated sampling point offsets were used as the network learning target when training the preset convolutional neural network, the sampling point offset prediction values (O'_xi, O'_yi) output by the sampling point offset detection head are summed directly with the pixel coordinates (x'_g, y'_g) of the corresponding target point at the feature-map scale to obtain the pixel coordinates (x'_i, y'_i) of the sampling points other than the target point at the feature-map scale, that is, x'_i = O'_xi + x'_g and y'_i = O'_yi + y'_g. If normalized sampling point offsets were used as the network learning target, the sampling point offset prediction values (O'_xi, O'_yi) are first de-normalized to recover the sampling point offsets before normalization, and these are then summed with the pixel coordinates (x'_g, y'_g) of the corresponding target point at the feature-map scale to obtain the pixel coordinates (x'_i, y'_i) of the sampling points other than the target point. If the sampling point offsets were normalized by the width and height of the input image during training, the de-normalization at prediction time is based on the width and height of the input image; if they were normalized by the traffic cone line length during training, the de-normalization at prediction time uses the traffic cone line length obtained from the traffic cone line length prediction value.
The abscissa x'_i of the pixel coordinates (x'_i, y'_i) of each sampling point other than the target point on the traffic cone line feature map is multiplied by the horizontal sampling multiple S_x, and the ordinate y'_i is multiplied by the vertical sampling multiple S_y, thereby obtaining the pixel coordinates, on the input road image to be detected, of the sampling points other than the target point of each traffic cone line instance and, with them, the traffic cone line sequence. When the traffic cone line sequences are visualized, the points in each traffic cone line sequence are connected in order to obtain the line representation corresponding to each traffic cone line sequence.
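Putting the sampling point recovery and the scaling step together, a minimal sketch is given below for illustration; the handling of normalization through an optional pair of factors is an assumption made for the example.

    def recover_sequence(point_offsets, xg, yg, sx, sy, norm=None):
        # point_offsets: predicted (O'_xi, O'_yi) pairs for the sampling points of one instance.
        # (xg, yg): target point coordinates (x'_g, y'_g) at the feature-map scale.
        # norm: optional (fx, fy) de-normalization factors, e.g. image width/height or the
        #       traffic cone line length recovered from the length prediction; None = no normalization.
        sequence = []
        for ox, oy in point_offsets:
            if norm is not None:
                ox, oy = ox * norm[0], oy * norm[1]     # inverse of the normalization used in training
            xi, yi = ox + xg, oy + yg                   # x'_i = O'_xi + x'_g, y'_i = O'_yi + y'_g
            sequence.append((xi * sx, yi * sy))         # pixel coordinates on the road image to be detected
        return sequence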
End-to-end traffic cone line detection is performed on the road image to be detected through the preset detection model, so the detection process is simple and the detection speed is high; after the detection result is obtained, only a simple analysis of the result is needed to quickly recover an instance-level traffic cone line representation, without a complex post-processing procedure, with little time consumed, and with good real-time performance.
Further, after the traffic cone line sequences are obtained, they can be further processed to screen out low-quality traffic cone line sequences; specifically, the traffic cone line sequences can be screened based on the distance between two traffic cone lines.
Specifically, the distances between the traffic cone lines in the traffic cone line sequences are calculated first.
In one embodiment, the horizontal distance between traffic cone lines can be calculated from the coordinates of each point in the traffic cone line sequences, and this horizontal distance is taken as the distance between the traffic cone lines. Suppose the traffic cone line sequence corresponding to traffic cone line a is I_a = {(x_i^a, y_i^a)} and the traffic cone line sequence corresponding to traffic cone line b is I_b = {(x_i^b, y_i^b)}, where (x_i^a, y_i^a) is the i-th sampling point of traffic cone line sequence I_a, (x_i^b, y_i^b) is the i-th sampling point of traffic cone line sequence I_b, a_e is the end point index of I_a, b_e is the end point index of I_b, a_s is the start point index of I_a, and b_s is the start point index of I_b. The horizontal distance D between traffic cone line a and traffic cone line b can then be calculated as the average of the horizontal coordinate differences over the overlapping index range:
D = (1 / (e − s + 1)) · Σ_{i=s..e} |x_i^a − x_i^b|
wherein e = min(a_e, b_e), s = max(a_s, b_s), min() is the minimum function, and max() is the maximum function.
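For illustration only, this distance can be sketched as follows, assuming both sequences share a common index space and that the mean horizontal coordinate difference over the overlapping index range is taken; the function name is chosen for the example.

    def horizontal_distance(seq_a, seq_b, a_s, a_e, b_s, b_e):
        # seq_a, seq_b: traffic cone line sequences as lists of (x, y) image coordinates
        # sharing a common index space; a_s/a_e and b_s/b_e are start and end point indices.
        s, e = max(a_s, b_s), min(a_e, b_e)
        if e < s:
            return float("inf")                                   # the index ranges do not overlap
        diffs = [abs(seq_a[i][0] - seq_b[i][0]) for i in range(s, e + 1)]
        return sum(diffs) / len(diffs)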
In another embodiment, the angle between each traffic cone line and the horizontal line can be obtained by fitting the traffic cone line sequence, and a distance weight coefficient between the traffic cone lines is calculated from the angles between the traffic cone lines and the horizontal line. Suppose the angle between the horizontal line and the line fitted to the traffic cone line sequence I_a corresponding to traffic cone line a is θ_a, and the angle between the horizontal line and the line fitted to the traffic cone line sequence I_b corresponding to traffic cone line b is θ_b; the distance weight coefficient between traffic cone line a and traffic cone line b is w_{a,b} = sin(max(θ_a, θ_b)) / cos(|θ_a − θ_b| / 2). The horizontal distance between the traffic cone lines is weighted by the distance weight coefficient between the traffic cone lines to obtain the distance between the traffic cone lines. It can be understood that the angle between the horizontal line and each point of each traffic cone line sequence can also be fitted; the distance weight coefficient of two points is then calculated from the angles between the two points of the two traffic cone line sequences and the horizontal line, the horizontal distance between the two points (i.e. the difference of their horizontal coordinates) is weighted by the distance weight coefficient of the two points to obtain a horizontally weighted distance, and finally the horizontally weighted distances of all points in the two traffic cone line sequences are averaged to obtain the distance between the two traffic cone line sequences.
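For illustration, the weighting can be sketched as follows, assuming the angles are given in radians and that the weight formula reconstructed above is used; the function name is chosen for the example.

    import math

    def weighted_distance(horizontal_dist, theta_a, theta_b):
        # w_{a,b} = sin(max(theta_a, theta_b)) / cos(|theta_a - theta_b| / 2)
        w = math.sin(max(theta_a, theta_b)) / math.cos(abs(theta_a - theta_b) / 2)
        return w * horizontal_dist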
In the embodiment of the application, the angle between each traffic cone line sequence and the horizontal line is fitted, the distance weight coefficient is calculated from the angles between the two traffic cone line sequences and the horizontal line, and the horizontal distance between the two traffic cone line sequences is weighted by the distance weight coefficient, so that the computed distance between traffic cone line sequences is insensitive to the angle between the two sequences and the distance measurement between traffic cone lines is not distorted by their angular relationship.
When the distance between two traffic cone lines is smaller than a preset distance threshold, the traffic cone line sequence with the lower traffic cone line category probability is eliminated as a low-quality traffic cone line sequence; when the distance between the two traffic cone lines is greater than or equal to the preset distance threshold, the traffic cone line sequences corresponding to both traffic cone lines are retained. The final traffic cone line sequences are thereby obtained. The preset distance threshold can be set according to the actual situation and is not specifically limited here.
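A minimal sketch of this screening step is given below for illustration; the greedy order (keeping higher category probability first) and the data layout are assumptions made for the example.

    def filter_sequences(instances, distance_fn, dist_thr):
        # instances: list of dicts with keys "sequence" (the traffic cone line sequence) and
        # "score" (the traffic cone line category probability).
        # distance_fn: a function returning the distance between two traffic cone line sequences.
        kept = []
        for inst in sorted(instances, key=lambda i: i["score"], reverse=True):
            # keep the instance only if it is not too close to an already kept, higher-scoring one
            if all(distance_fn(inst["sequence"], k["sequence"]) >= dist_thr for k in kept):
                kept.append(inst)
        return kept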
After the final traffic cone line sequences are obtained, when they need to be visualized, the points in each final traffic cone line sequence are connected in order to obtain the line representation corresponding to each final traffic cone line sequence. In addition, in an actual automatic driving application scenario, the coordinates of each point in the final traffic cone line sequences need to be converted from two dimensions to three dimensions to obtain the traffic cone line sequences in a physical coordinate system. Converting the coordinates of each point in the final traffic cone line sequences from two dimensions to three dimensions belongs to the prior art and is not described here again.
In the embodiment of the application, traffic cone lines are drawn by taking the intersection points of the traffic cones and the ground in the road image to be marked as marking points, namely, the grouped traffic cones are marked as an independent example, so that the high-order semantic consistency of the connection relation of the traffic cones is guaranteed, and the specific connection relation representation of the traffic cones is obtained; and then, traffic cone line detection is carried out on the road image to be detected based on the marked road image and the traffic cone line ordered point set obtained by marking, the grouped traffic cone lines can be integrally detected as an example, fluctuation is not easy to generate, the consistency, the continuity and the robustness of the traffic cone connection result can be effectively improved, and the technical problems that in the prior art, the related traffic cones are connected in groups based on preset rules, the consistency result is difficult to obtain, the connection result is easy to generate fluctuation, and the robustness is low are solved.
The above is an embodiment of a traffic cone detection method provided by the present application, and the following is an embodiment of a traffic cone detection device provided by the present application.
Referring to fig. 4, an embodiment of the present application provides a traffic cone detection device, including:
the acquisition unit is used for acquiring a road image to be marked, wherein the road image to be marked comprises at least one group of traffic cones;
the marking unit is used for drawing traffic cone lines on the road image to be marked based on the marking points to obtain a marked road image and generate a traffic cone line ordered point set of the traffic cone lines;
and the detection unit is used for processing the road image to be detected based on the marked road image and the traffic cone line ordered point set to obtain a traffic cone line detection result.
As a further improvement, the detection unit is specifically configured to:
calculating the target offset of each traffic cone line through the marked road image and the traffic cone line ordered point set;
training a preset convolution neural network by taking the marked road image as input data and taking the traffic cone line type, the target offset and the traffic cone line length as learning targets to obtain a preset detection model, wherein the traffic cone line type and the traffic cone line length are obtained by marking;
and carrying out traffic cone line detection on the road image to be detected through a preset detection model to obtain a traffic cone line detection result.
As a further improvement, the traffic cone detection device in the embodiment of the present application further includes: a screening unit for:
calculating the distance between each traffic cone in the traffic cone sequence;
and when the distance between the two traffic cone lines is smaller than a preset distance threshold value, eliminating the traffic cone line sequence with the lower traffic cone line category probability to obtain the final traffic cone line sequence.
As a further improvement, the detection unit is specifically configured to:
expanding the connecting line of the marking points in the traffic cone line ordered point set to a segmentation marking with preset width through pixels to obtain a target label of the marked road image;
taking the marked road image as input data and the target label as a network learning target to train a preset convolutional neural network to obtain a preset segmentation model;
carrying out traffic cone line segmentation on the road image to be detected through a preset segmentation model, and outputting a traffic cone line segmentation result;
clustering traffic cone line pixel points in the traffic cone line segmentation result to obtain a plurality of traffic cone line examples;
and fitting the traffic cone line pixel points in each traffic cone line example to obtain the detected traffic cone lines.
As a further improvement, the method further comprises the following steps: a pre-processing unit to:
judging whether the quantity of the marked points in the traffic cone line ordered point set is equal to the quantity of the preset sampling points or not;
if not, sequentially connecting two adjacent marking points in the traffic cone line ordered point set to obtain a line segment vector group;
calculating the curvature radius of the circular arc corresponding to two adjacent line segment vectors in the line segment vector group;
and processing the traffic cone line ordered point set according to the curvature radius, so that the final number of the marked points of the traffic cone line ordered point set is equal to the number of the preset sampling points.
In the embodiment of the application, traffic cone lines are drawn by taking the intersection point of each traffic cone and the ground in the road image to be labeled as a labeling point, namely, the grouped traffic cones are labeled as an independent example, which is helpful for ensuring the high-order semantic consistency of the connection relation of the traffic cones, so that the specific connection relation representation of the traffic cones is obtained; and then, traffic cone line detection is carried out on the road image to be detected based on the marked road image and the traffic cone line ordered point set obtained by marking, the grouped traffic cone lines can be integrally detected as an example, fluctuation is not easy to generate, the consistency, the continuity and the robustness of the traffic cone connection result can be effectively improved, and the technical problems that in the prior art, the related traffic cones are connected in groups based on preset rules, the consistency result is difficult to obtain, the connection result is easy to generate fluctuation, and the robustness is low are solved.
Referring to fig. 5, an embodiment of the present application further provides a traffic cone detection device, where the device includes a processor and a memory;
the memory is used for storing the program codes and transmitting the program codes to the processor;
the processor is used for executing the traffic cone detection method in the foregoing method embodiment according to instructions in the program code.
The embodiment of the present application further provides a computer-readable storage medium, which is used for storing program codes, and the program codes, when executed by a processor, implement the traffic cone detection method in the foregoing method embodiments.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The terms "first," "second," "third," "fourth," and the like in the description of the application and the above-described figures, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged under appropriate circumstances such that the embodiments of the application described herein may be implemented, for example, in sequences other than those illustrated or described herein. Moreover, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" for describing an association relationship of associated objects, indicating that there may be three relationships, e.g., "a and/or B" may indicate: only A, only B and both A and B are present, wherein A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of single item(s) or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for executing all or part of the steps of the method described in the embodiments of the present application through a computer device (which may be a personal computer, a server, or a network device). And the aforementioned storage medium includes: a U disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (11)

1. A traffic cone detection method is characterized by comprising the following steps:
acquiring a road image to be marked, wherein the road image to be marked comprises at least one group of traffic cones;
taking the intersection point of each traffic cone and the ground in the road image to be marked as a marking point, drawing a traffic cone line on the road image to be marked based on the marking point to obtain a marked road image, and generating a traffic cone line ordered point set of the traffic cone line;
and processing the road image to be detected based on the marked road image and the traffic cone line ordered point set to obtain a traffic cone line detection result.
2. The traffic cone detection method according to claim 1, wherein the processing the road image to be detected based on the labeled road image and the traffic cone ordered point set to obtain a traffic cone detection result comprises:
calculating the target offset of each traffic cone line through the marked road image and the traffic cone line ordered point set;
training a preset convolutional neural network by taking the marked road image as input data and taking the traffic cone line type, the target offset and the traffic cone line length as learning targets to obtain a preset detection model, wherein the traffic cone line type and the traffic cone line length are obtained by marking;
and carrying out traffic cone line detection on the road image to be detected through the preset detection model to obtain a traffic cone line detection result.
3. The traffic cone detection method according to claim 2, wherein the calculating the target offset of each traffic cone from the labeled road image and the ordered set of traffic cone points comprises:
calculating pixel coordinates of the traffic cone line characteristic diagram projected by the target point according to the size of the marked road image and the size of the traffic cone line characteristic diagram extracted by the preset convolution neural network by taking a point which is half of the accumulated length of each traffic cone line in the traffic cone line ordered point set as the target point;
calculating the target point offset of each traffic cone line according to the pixel coordinates of the target point of each traffic cone line in the traffic cone line characteristic diagram;
calculating the offset of the sampling point except the target point in each traffic cone line in the traffic cone line ordered point set and the corresponding target point to obtain the offset of the sampling point;
and taking the target point offset and the sampling point offset of each traffic cone as target offsets, or normalizing the sampling point offsets of each traffic cone, and taking the target point offset and the normalized sampling point offset of each traffic cone as target offsets.
4. The traffic cone detection method according to claim 2, wherein the detecting the traffic cone of the road image to be detected by the preset detection model to obtain the traffic cone detection result comprises:
carrying out traffic cone line detection on the road image to be detected through the preset detection model, and outputting traffic cone line existence probability, traffic cone line category probability, target offset predicted value and traffic cone line length predicted value of each pixel point in a traffic cone line characteristic diagram extracted by the preset detection model;
extracting target pixel points of which the existence probability of the traffic cone lines is higher than an existence probability threshold value from a traffic cone line characteristic diagram output by the preset detection model;
determining the traffic cone line category of the target pixel point according to the traffic cone line category probability of the target pixel point, and extracting a traffic cone line example with the traffic cone line category probability higher than a category probability threshold value from the target pixel point;
acquiring the length of the traffic cone line instance according to the predicted traffic cone line length value;
and acquiring the pixel coordinates of each point in the traffic cone line example in the image to be detected according to the target deviation predicted value to obtain a traffic cone line sequence.
5. The traffic cone detection method of claim 4, further comprising:
calculating the distance between each traffic cone in the traffic cone sequence;
and when the distance between the two traffic cone lines is smaller than a preset distance threshold value, eliminating the traffic cone line sequence with the lower traffic cone line category probability to obtain a final traffic cone line sequence.
6. The traffic cone detection method according to claim 1, wherein the processing the road image to be detected based on the labeled road image and the traffic cone ordered point set to obtain a traffic cone detection result comprises:
expanding the connecting line of the marking points in the traffic cone line ordered point set to a segmentation marking with a preset width through pixels to obtain a target label of the marked road image;
taking the marked road image as input data, and taking the target label as a network learning target to train a preset convolutional neural network to obtain a preset segmentation model;
carrying out traffic cone line segmentation on the road image to be detected through the preset segmentation model, and outputting a traffic cone line segmentation result;
clustering traffic cone line pixel points in the traffic cone line segmentation result to obtain a plurality of traffic cone line examples;
and fitting the traffic cone line pixel points in each traffic cone line example to obtain the detected traffic cone lines.
7. The traffic cone detection method according to claim 1, wherein the processing of the road image to be detected based on the labeled road image and the traffic cone ordered point set to obtain a traffic cone detection result further comprises:
judging whether the number of the marked points in the traffic cone line ordered point set is equal to the number of preset sampling points or not;
if not, sequentially connecting two adjacent marking points in the traffic cone line ordered point set to obtain a line segment vector group;
calculating the curvature radius of the circular arc corresponding to two adjacent line segment vectors in the line segment vector group;
and processing the traffic cone line ordered point set according to the curvature radius, so that the final number of the marking points of the traffic cone line ordered point set is equal to the number of the preset sampling points.
8. The traffic cone detection method according to claim 7, wherein the processing the ordered set of traffic cones according to the curvature radius so that the number of final labeled points of the ordered set of traffic cones is equal to the number of preset sampling points comprises:
when the number of the marking points of the traffic cone line ordered point set is smaller than the number of the preset sampling points, calculating a first number and a second number according to the difference value between the number of the preset sampling points and the number of the marking points of the traffic cone line ordered point set and the number of the line segment vectors;
sampling a second quantity of sampling points on the previous line segment vector in the adjacent two line segment vectors corresponding to the first quantity of minimum curvature radii, and sampling the second quantity of sampling points on the rest line segment vectors, so that the final marking point quantity of the traffic cone line ordered point set is equal to the preset sampling point quantity, and the final marking points comprise original marking points and newly added sampling points;
when the number of the marking points of the traffic cone line ordered point set is greater than the number of the preset sampling points, calculating the difference value between the number of the marking points of the traffic cone line ordered point set and the number of the preset sampling points to obtain a third number;
and deleting the line segment terminal point of the previous line segment vector in the adjacent two line segment vectors corresponding to the maximum curvature radius of the third number, so that the final number of the marking points of the traffic cone line ordered point set is equal to the number of the preset sampling points.
9. A traffic cone detection device, comprising:
the system comprises an acquisition unit, a display unit and a control unit, wherein the acquisition unit is used for acquiring a road image to be marked, and the road image to be marked comprises at least one group of traffic cones;
the marking unit is used for taking the intersection point of each traffic cone in the road image to be marked and the ground as a marking point, drawing a traffic cone line on the road image to be marked based on the marking point to obtain a marked road image and generating a traffic cone line ordered point set of the traffic cone line;
and the detection unit is used for processing the road image to be detected based on the marked road image and the traffic cone line ordered point set to obtain a traffic cone line detection result.
10. A traffic cone detection device, comprising a processor and a memory;
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to execute the traffic cone detection method according to any one of claims 1-8 according to instructions in the program code.
11. A computer-readable storage medium for storing program code, which when executed by a processor implements the traffic cone detection method according to any one of claims 1 to 8.
CN202210602966.7A 2022-05-30 2022-05-30 Traffic cone line detection method, device, equipment and storage medium Pending CN115050003A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210602966.7A CN115050003A (en) 2022-05-30 2022-05-30 Traffic cone line detection method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115050003A true CN115050003A (en) 2022-09-13

Family

ID=83159810

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210602966.7A Pending CN115050003A (en) 2022-05-30 2022-05-30 Traffic cone line detection method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115050003A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110427369A (en) * 2019-07-17 2019-11-08 广州市高速公路有限公司营运分公司 Road, which encloses, covers monitoring method and device
CN111460984A (en) * 2020-03-30 2020-07-28 华南理工大学 Global lane line detection method based on key point and gradient balance loss
CN113011338A (en) * 2021-03-19 2021-06-22 华南理工大学 Lane line detection method and system
CN113968229A (en) * 2021-11-30 2022-01-25 广州文远知行科技有限公司 Road area determination method and device and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Shi Linbo: "Research on the Recognition Method of Traffic-Cone Lanes in Autonomous Driving and Its Path Planning", China Master's Theses Full-text Database, Engineering Science and Technology II, 15 July 2020 (2020-07-15), pages 035-296 *

Similar Documents

Publication Publication Date Title
JP2019153281A (en) Method, device and equipment for determining traffic lane line on road
CN108182433B (en) Meter reading identification method and system
CN108985380B (en) Point switch fault identification method based on cluster integration
CN108279016B (en) Smoothing processing method and device for HAD map, navigation system and automatic driving system
CN111784017B (en) Road traffic accident number prediction method based on road condition factor regression analysis
CN111144325A (en) Fault identification and positioning method, device and equipment for power equipment of transformer substation
CN110444011B (en) Traffic flow peak identification method and device, electronic equipment and storage medium
CN104820673B (en) Time Series Similarity measure based on adaptivity segmentation statistical approximation
CN110889399B (en) High-resolution remote sensing image weak and small target detection method based on deep learning
CN110798805B (en) Data processing method and device based on GPS track and storage medium
CN112766113B (en) Intersection detection method, device, equipment and storage medium
CN109740609A (en) A kind of gauge detection method and device
CN108241819A (en) The recognition methods of pavement markers and device
CN110704652A (en) Vehicle image fine-grained retrieval method and device based on multiple attention mechanism
CN105912977A (en) Lane line detection method based on point clustering
CN113406623A (en) Target identification method, device and medium based on radar high-resolution range profile
CN115081505A (en) Pedestrian network incremental generation method based on walking track data
CN114202123A (en) Service data prediction method and device, electronic equipment and storage medium
CN115050003A (en) Traffic cone line detection method, device, equipment and storage medium
CN106296747A (en) Robust multi-model approximating method based on structure decision diagram
CN103064857A (en) Image query method and image query equipment
CN115658710A (en) Map updating processing method and device, electronic equipment and storage medium
CN114821502A (en) Pavement mark detection method, device, equipment and storage medium
CN114973300A (en) Component type identification method and device, electronic equipment and storage medium
KR101419334B1 (en) Apparatus for extracting object from 3D data and method of the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination