CN113780067A - Lane linear marker detection method and system based on semantic segmentation - Google Patents

Lane linear marker detection method and system based on semantic segmentation

Info

Publication number
CN113780067A
CN113780067A
Authority
CN
China
Prior art keywords
lane
marker
linear
semantic segmentation
linear marker
Prior art date
Legal status
Pending
Application number
CN202110868082.1A
Other languages
Chinese (zh)
Inventor
万齐斌
何云
刘奋
Current Assignee
Heading Data Intelligence Co Ltd
Original Assignee
Heading Data Intelligence Co Ltd
Priority date
Filing date
Publication date
Application filed by Heading Data Intelligence Co Ltd
Priority to CN202110868082.1A
Publication of CN113780067A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method and a system for detecting lane linear markers based on semantic segmentation, comprising the following steps: S1, training a semantic segmentation model, inputting an original image, outputting a mask image, and extracting a binary image from the original image according to the gray value label of the lane linear marker, the lane linear markers including transverse linear markers and longitudinal linear markers; S2, extracting the contour of each lane linear marker in the binary image and storing the contour points; S3, filling different lane linear markers with different gray values according to the contour points and counting the pixel coordinate set of each target; and S4, extracting the central axis of each lane linear marker according to the pixel coordinate set of each lane linear marker target, and outputting the coordinates of the front and rear end points of the central axis as the vectorization result. Based on the semantic segmentation technique, the recognition precision of lane linear markers is improved.

Description

Lane linear marker detection method and system based on semantic segmentation
Technical Field
The invention relates to the technical field of automatic driving and high-precision map making, in particular to a method and a system for detecting lane linear markers based on semantic segmentation.
Background
In the fields of automatic driving and high-precision map making, real-time detection of targets is one of the most important requirements. At present, real-time detection mostly relies on YOLO-series one-stage object detection networks, and the actual detection effect of these networks on special linear markers on lanes (such as stop lines and the vertical poles on both sides of a lane) is not ideal.
Disclosure of Invention
Aiming at the above technical problems in the prior art, the invention provides a method and a system for detecting lane linear markers based on semantic segmentation, which can improve the recognition precision of lane linear markers by means of the semantic segmentation technique.
The technical solution adopted by the invention to solve the above technical problems is as follows:
in a first aspect, the invention provides a method for detecting lane linear markers based on semantic segmentation, which comprises the following steps:
s1, training a semantic segmentation model, inputting an original image, outputting a mask image, and extracting a binary image of the original image according to a gray value label of the lane linear marker; the lane linear marker comprises a transverse linear marker and a longitudinal linear marker;
s2, extracting the contour of each lane linear marker in the binary image, and storing contour points;
s3, filling different lane linear markers with different gray values according to the contour points, and counting a pixel coordinate set of each target;
and S4, extracting the central axis of each lane linear marker according to the pixel coordinate set of each lane linear marker target, and outputting the coordinates of the front point and the rear point of the central axis as a vectorization result.
Further, the step S4 further includes: connecting and merging two or more central axes that lie on the same straight line and whose length after connection does not exceed a preset threshold, and outputting the coordinates of the front and rear points of the merged central axis as the vectorization result.
Further, step S1 includes:
initializing a semantic segmentation model;
marking lane linear markers in the training data set;
training and parameter adjusting the semantic segmentation model for multiple times by using the labeled training data set until the semantic segmentation model meets the requirements;
the original image is used as the input of the trained semantic segmentation model to obtain the predicted value of each pixel point of the original image, so that a mask image corresponding to the original image is output;
and extracting a binary image of the lane linear marker from the mask image according to the gray value label of the lane linear marker.
Further, in step S2, the contour of each lane linear marker in the binary image is extracted by using an OpenCV tool, and the contour points are saved.
Further, step S4 includes:
dividing the pixel coordinate set of each lane linear marker target into a plurality of subsets according to whether the abscissa or the ordinate is the same;
for the subsets grouped by identical abscissa, averaging the ordinates within each subset, so that each abscissa x1 corresponds to one ordinate average y1'; connecting the points (x1, y1') forms the central axis of the lane linear marker, and the two end-point coordinates of the central axis are output as the vectorization result of the transverse linear marker;
for the subsets grouped by identical ordinate, averaging the abscissas within each subset, so that each ordinate y2 corresponds to one abscissa average x2'; connecting the points (x2', y2) forms the central axis of the lane linear marker, and the two end-point coordinates of the central axis are output as the vectorization result of the longitudinal linear marker.
Further, step S4 further includes:
taking the two end points of the central axis as the midpoints of two opposite sides of a rectangular frame, thereby fitting the central axis into a rectangular frame wrapping the lane linear marker, and outputting the rectangular frame as the vectorization result.
In a second aspect, the present invention further provides a system for detecting lane linear markers based on semantic segmentation, including:
the semantic segmentation module is used for training a semantic segmentation model, inputting an original image, outputting a mask image and extracting a binary image of the original image according to a gray value label of the lane linear marker; the lane linear marker comprises a transverse linear marker and a longitudinal linear marker;
the contour extraction module is used for extracting the contour of each lane linear marker in the binary image and storing contour points;
the statistical module is used for filling different gray values into different lane linear markers according to the contour points and counting the pixel coordinate set of each target;
and the central axis extraction module is used for extracting the central axis of the lane linear marker according to the pixel coordinate set of each lane linear marker target, and taking the coordinates of the front point and the rear point of the central axis as vectorization results to output.
Further, the central axis extraction module includes a merging and connecting module, configured to connect and merge two or more central axes that lie on the same straight line and whose length after connection does not exceed a preset threshold, and to output the coordinates of the front and rear points of the merged central axis as the vectorization result.
In a third aspect, the present invention also provides an electronic device comprising,
a memory for storing a computer software program;
and a processor for reading and executing the computer software program stored in the memory, so as to implement the semantic-segmentation-based lane linear marker detection method according to the first aspect of the invention.
In a fourth aspect, the present invention further provides a non-transitory computer-readable storage medium, wherein a computer software program for implementing the semantic-segmentation-based lane linear marker detection method according to the first aspect of the present invention is stored in the storage medium.
The invention has the beneficial effects that: in order to achieve finer recognition, a semantic segmentation technique is adopted for pixel-by-pixel recognition, and the two end points of each lane linear marker are obtained through post-processing; these two end points constitute a new vector expression form. At the same time, vector expression in the form of a rectangular frame is also considered, and a finer rectangular frame is fitted.
Drawings
FIG. 1 is a schematic flow chart of a method for detecting lane linear markers based on semantic segmentation according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a system for detecting lane linear markers based on semantic segmentation according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to the present invention;
fig. 4 is a schematic structural diagram of a computer-readable storage medium according to the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
As shown in fig. 1, an embodiment of the present invention provides a method for detecting lane linear markers based on semantic segmentation, including the following steps:
s1, training a semantic segmentation model, inputting an original image, outputting a mask image, and extracting a binary image of the original image according to a gray value label of the lane linear marker; the lane markers include a transverse marker and a longitudinal marker.
Specifically, S1 includes the following:
initializing a semantic segmentation model;
marking the lane linear markers in the training data set, setting the pixel values of the different objects contained in the data set to different gray values, and setting the background gray value to 0; for example, the gray value of the transverse linear markers among the lane linear markers may be set to 1, the gray value of the longitudinal linear markers may be set to 2, the gray value of the lane guardrail may be set to 3, and so on.
Training and parameter adjusting the semantic segmentation model for multiple times by using the labeled training data set until the semantic segmentation model meets the requirements;
the original image is used as the input of the trained semantic segmentation model to obtain the predicted value of each pixel point of the original image, so that a mask image corresponding to the original image is output;
and extracting a binary image of the lane linear marker from the mask image according to the gray value label of the lane linear marker.
It should be noted that the transverse linear markers referred to in the present embodiment include, but are not limited to, lane stop lines, and the longitudinal linear markers include, but are not limited to, the posts of lighting devices on both sides of a lane and the posts of traffic signs.
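To make step S1 concrete, a minimal sketch of the training, mask prediction, and binary-image extraction stages is given below. The patent does not prescribe a particular segmentation network; PyTorch with a torchvision DeepLabV3 backbone, the class count, the optimizer, and the data-loader format are all illustrative assumptions, not part of the disclosed method.

```python
# Sketch only: the network, hyper-parameters, and data format are assumptions.
import numpy as np
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 4          # assumed: 0 background, 1 transverse line, 2 longitudinal line, 3 guardrail
TRANSVERSE_LABEL = 1     # gray-value label of transverse linear markers
LONGITUDINAL_LABEL = 2   # gray-value label of longitudinal linear markers

model = deeplabv3_resnet50(num_classes=NUM_CLASSES)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_one_epoch(loader, device="cuda"):
    """One pass over a DataLoader yielding (image, label) batches:
    image is float32 [N,3,H,W], label is int64 [N,H,W] holding the gray-value class ids."""
    model.to(device).train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        logits = model(images)["out"]        # [N, NUM_CLASSES, H, W]
        loss = criterion(logits, labels)     # per-pixel cross entropy
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

def predict_mask(image_tensor, device="cuda"):
    """Run the trained model on one image [1,3,H,W] and return the mask image:
    a uint8 array [H,W] whose pixel values are the predicted gray-value labels."""
    model.to(device).eval()
    with torch.no_grad():
        logits = model(image_tensor.to(device))["out"]
    return logits.argmax(dim=1).squeeze(0).cpu().numpy().astype(np.uint8)

def extract_binary_map(mask, label):
    """Binary image of one marker class: pixels whose gray value equals `label`
    become 255, everything else 0."""
    return np.where(mask == label, 255, 0).astype(np.uint8)
```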
And S2, extracting the contour of each lane linear marker in the binary image by adopting an OpenCV tool, and storing contour points.
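A possible implementation of this contour-extraction step with the OpenCV Python bindings is sketched below; the embodiment only states that an OpenCV tool is used, so the specific retrieval and approximation flags are assumptions.

```python
import cv2

def extract_contours(binary_map):
    """Extract the outline of every lane linear marker region in the binary image.
    RETR_EXTERNAL keeps only the outer contours; CHAIN_APPROX_SIMPLE stores a
    compressed set of contour points for each marker."""
    contours, _ = cv2.findContours(binary_map, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return contours  # one array of contour points per marker
```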
And S3, filling different lane linear markers with different gray values according to the contour points to distinguish the different lane linear markers, and then counting the pixel coordinate set of each target.
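The filling-and-counting step could be implemented roughly as follows; filling each contour with its 1-based index as the distinguishing gray value is an assumption made for illustration.

```python
import numpy as np
import cv2

def fill_and_collect(binary_map, contours):
    """Fill each marker contour with a distinct gray value so different markers can be
    told apart, then collect the (x, y) pixel coordinate set of each target."""
    filled = np.zeros_like(binary_map)
    coord_sets = []
    for idx, contour in enumerate(contours, start=1):
        cv2.drawContours(filled, [contour], -1, color=idx, thickness=cv2.FILLED)
        ys, xs = np.where(filled == idx)                        # rows are y, columns are x
        coord_sets.append(list(zip(xs.tolist(), ys.tolist())))  # pixel set of target idx
    return filled, coord_sets
```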
And S4, extracting the central axis of each lane linear marker according to the pixel coordinate set of each lane linear marker target, and outputting the coordinates of the front point and the rear point of the central axis as a vectorization result.
Specifically, step S4 includes:
1) Dividing the pixel coordinate set of each lane linear marker target into a plurality of subsets according to whether the abscissa or the ordinate is the same.
2) For the subsets grouped by identical abscissa, averaging the ordinates within each subset, so that each abscissa x1 corresponds to one ordinate average y1'; the points (x1, y1') are connected to form the central axis of the lane linear marker, and the two end-point coordinates of the central axis are output as the vectorization result of the transverse linear marker.
For example, for the stop line L1 on the lane, the pixel coordinate set after the statistics of step S3 is as follows:
(The pixel coordinate set of L1 is shown as an image in the original publication.)
The pixel coordinate set of L1 is divided into 5 subsets according to identical abscissas (the subsets are shown as an image in the original publication).
After the ordinates within each subset are averaged, the resulting coordinates are (1,2), (2,2), (3,2), (4,2), (5,2); these coordinates are connected to form the central axis of the stop line L1, and the coordinates of its two end points, i.e., { (1,2), (5,2) }, are taken as the vectorization result of the stop line L1.
In the actual detection process, it is found that a complete stop line is often divided into two or more segments by a passing vehicle. Since the length of a stop line in an image is often not more than 5 pixels, the segments of a stop line divided by a vehicle within this 5-pixel range need to be merged and connected. The two end points of the merged stop line L1 are then output as the vectorization result of the stop line L1.
3) For the subsets grouped by identical ordinate, averaging the abscissas within each subset, so that each ordinate y2 corresponds to one abscissa average x2'; the points (x2', y2) are connected to form the central axis of the lane linear marker, and the two end-point coordinates of the central axis are output as the vectorization result of the longitudinal linear marker.
For example, for the vertical rod L2, the pixel coordinate set after the statistics of step S3 is as follows:
(The pixel coordinate set of L2, i.e., the union of the 5 subsets listed below, is shown as an image in the original publication.)
The pixel coordinate set of L2 is divided into 5 subsets according to identical ordinates, as follows:
{(1,1)(2,1)(3,1)},{(1,2)(2,2)(3,2)},{(1,3)(2,3)(3,3)},{(1,4)(2,4)(3,4)},{(1,5)(2,5)(3,5)}
After the abscissas within each subset are averaged, the resulting coordinates are (2,1), (2,2), (2,3), (2,4), (2,5); these coordinates are connected to form the central axis of the vertical rod L2, and the coordinates of its two end points, i.e., { (2,1), (2,5) }, are taken as the vectorization result of the vertical rod L2.
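The grouping-and-averaging of sub-steps 2) and 3) can be sketched as follows; the function and parameter names are illustrative, with transverse=True grouping by abscissa as in the stop-line example and transverse=False grouping by ordinate as in the vertical-rod example.

```python
from collections import defaultdict

def central_axis(coords, transverse=True):
    """Central axis of one marker from its pixel coordinate set.
    Transverse markers: group by abscissa x and average the ordinates y.
    Longitudinal markers: group by ordinate y and average the abscissas x.
    Returns the front and rear end points of the axis (the vectorization result)."""
    groups = defaultdict(list)
    for x, y in coords:
        key, value = (x, y) if transverse else (y, x)
        groups[key].append(value)
    axis = []
    for key in sorted(groups):
        mean = sum(groups[key]) / len(groups[key])
        axis.append((key, mean) if transverse else (mean, key))
    return axis[0], axis[-1]
```

Applied to the pixel coordinate set of the vertical rod L2 above with transverse=False, this returns ((2.0, 1), (2.0, 5)), matching the { (2,1), (2,5) } result of the example.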
In the actual detection process, a complete vertical rod is often occluded by signboards or roadside vegetation and divided into multiple segments. Since the length of a vertical rod in an image is often not more than 5 pixels, the segments of a vertical rod within this 5-pixel range need to be merged and connected. The two end points of the merged vertical rod L2 are then output as the vectorization result of the vertical rod L2.
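The merging of such segments could be sketched as below. For simplicity only the axis-aligned cases of the two examples are covered (segments sharing one image row or one image column), and max_length stands in for the preset threshold mentioned above; both the function name and this simplification are assumptions.

```python
def merge_collinear_axes(axes, max_length):
    """Merge central axes that lie on the same straight line when the length after
    connecting them does not exceed max_length (the preset threshold).
    Simplifying assumption: axes are axis-aligned, i.e. horizontal segments share one
    y value and vertical segments share one x value, as in the two examples above."""
    def same_line(a, b):
        (a1, a2), (b1, b2) = a, b
        horizontal = a1[1] == a2[1] == b1[1] == b2[1]
        vertical = a1[0] == a2[0] == b1[0] == b2[0]
        return horizontal or vertical

    def span(a, b):
        xs = [a[0][0], a[1][0], b[0][0], b[1][0]]
        ys = [a[0][1], a[1][1], b[0][1], b[1][1]]
        return max(max(xs) - min(xs), max(ys) - min(ys))

    merged = []
    for seg in sorted(axes):
        if merged and same_line(merged[-1], seg) and span(merged[-1], seg) <= max_length:
            prev = merged.pop()
            pts = [prev[0], prev[1], seg[0], seg[1]]
            merged.append((min(pts), max(pts)))   # new front and rear end points
        else:
            merged.append(seg)
    return merged
```

For instance, two collinear stop-line fragments ((1, 2), (2, 2)) and ((4, 2), (5, 2)) are merged into ((1, 2), (5, 2)) when max_length is 5.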
In another embodiment, step S4 further includes:
taking the two end points of the central axis as the midpoints of two opposite sides of a rectangular frame, thereby fitting the central axis into a rectangular frame wrapping the lane linear marker, and outputting the rectangular frame as the vectorization result.
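A sketch of this rectangle fitting for the axis-aligned cases is given below; the marker width (the length of the two sides whose midpoints are the axis end points) is an assumed input, since the embodiment does not specify how it is obtained; it could, for example, be measured from the filled mask of step S3.

```python
def axis_to_box(p1, p2, marker_width):
    """Fit a rectangle wrapping the marker, taking the two central-axis end points as
    the midpoints of two opposite sides. Returns (top-left, bottom-right) corners.
    Only the axis-aligned cases from the examples above are handled."""
    (x1, y1), (x2, y2) = p1, p2
    half = marker_width / 2.0
    if y1 == y2:                      # transverse marker: horizontal axis
        return (min(x1, x2), y1 - half), (max(x1, x2), y1 + half)
    return (x1 - half, min(y1, y2)), (x1 + half, max(y1, y2))   # longitudinal marker
```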
The embodiment of the present invention is based on the above method, and further provides a system for detecting lane linear markers based on semantic segmentation, which has a structure as shown in fig. 2, and includes:
the semantic segmentation module is used for training a semantic segmentation model, inputting an original image, outputting a mask image and extracting a binary image of the original image according to a gray value label of the lane linear marker; the lane linear marker comprises a transverse linear marker and a longitudinal linear marker;
the contour extraction module is used for extracting the contour of each lane linear marker in the binary image and storing contour points;
the statistical module is used for filling different gray values into different lane linear markers according to the contour points and counting the pixel coordinate set of each target;
and the central axis extraction module is used for extracting the central axis of the lane linear marker according to the pixel coordinate set of each lane linear marker target, and taking the coordinates of the front point and the rear point of the central axis as vectorization results to output.
Further, the central axis extraction module includes a merging and connecting module, configured to connect and merge two or more central axes that lie on the same straight line and whose length after connection does not exceed a preset threshold, and to output the coordinates of the front and rear points of the merged central axis as the vectorization result.
Fig. 3 is a schematic diagram of an embodiment of an electronic device according to an embodiment of the present invention. As shown in fig. 3, an embodiment of the present invention provides an electronic device, which includes a memory 510, a processor 520, and a computer program 511 stored in the memory 510 and executable on the processor 520, wherein the processor 520 executes the computer program 511 to implement the following steps:
s1, training a semantic segmentation model, inputting an original image, outputting a mask image, and extracting a binary image of the original image according to a gray value label of the lane linear marker; the lane markers include a transverse marker and a longitudinal marker.
And S2, extracting the contour of each lane linear marker in the binary image by adopting an OpenCV tool, and storing contour points.
And S3, filling different lane linear markers with different gray values according to the contour points to distinguish the different lane linear markers, and then counting the pixel coordinate set of each target.
And S4, extracting the central axis of each lane linear marker according to the pixel coordinate set of each lane linear marker target, and outputting the coordinates of the front point and the rear point of the central axis as a vectorization result.
Fig. 4 is a schematic diagram of an embodiment of a computer-readable storage medium according to an embodiment of the present invention. As shown in fig. 4, the present embodiment provides a computer-readable storage medium 600 having a computer program 611 stored thereon, the computer program 611, when executed by a processor, implementing the steps of:
s1, training a semantic segmentation model, inputting an original image, outputting a mask image, and extracting a binary image of the original image according to a gray value label of the lane linear marker; the lane markers include a transverse marker and a longitudinal marker.
And S2, extracting the contour of each lane linear marker in the binary image by adopting an OpenCV tool, and storing contour points.
And S3, filling different lane linear markers with different gray values according to the contour points to distinguish the different lane linear markers, and then counting the pixel coordinate set of each target.
And S4, extracting the central axis of each lane linear marker according to the pixel coordinate set of each lane linear marker target, and outputting the coordinates of the front point and the rear point of the central axis as a vectorization result.
It should be noted that, in the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to relevant descriptions of other embodiments for parts that are not described in detail in a certain embodiment.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A method for detecting lane linear markers based on semantic segmentation is characterized by comprising the following steps:
s1, training a semantic segmentation model, inputting an original image, outputting a mask image, and extracting a binary image of the original image according to a gray value label of the lane linear marker; the lane linear marker comprises a transverse linear marker and a longitudinal linear marker;
s2, extracting the contour of each lane linear marker in the binary image, and storing contour points;
s3, filling different lane linear markers with different gray values according to the contour points, and counting a pixel coordinate set of each target;
and S4, extracting the central axis of each lane linear marker according to the pixel coordinate set of each lane linear marker target, and outputting the coordinates of the front point and the rear point of the central axis as a vectorization result.
2. The method for detecting lane linear markers based on semantic segmentation as claimed in claim 1, wherein the step S4 further comprises: connecting and merging two or more central axes that lie on the same straight line and whose length after connection does not exceed a preset threshold, and outputting the coordinates of the front and rear points of the merged central axis as the vectorization result.
3. The method for detecting lane linear markers based on semantic segmentation as claimed in claim 1, wherein step S1 comprises:
initializing a semantic segmentation model;
marking lane linear markers in the training data set;
training and parameter adjusting the semantic segmentation model for multiple times by using the labeled training data set until the semantic segmentation model meets the requirements;
the original image is used as the input of the trained semantic segmentation model to obtain the predicted value of each pixel point of the original image, so that a mask image corresponding to the original image is output;
and extracting a binary image of the lane linear marker from the mask image according to the gray value label of the lane linear marker.
4. The method for detecting lane linear markers based on semantic segmentation as claimed in claim 1, wherein in step S2, an OpenCV tool is used to extract the contour of each lane linear marker in the binary image and store the contour points.
5. The method for detecting lane linear markers based on semantic segmentation as claimed in claim 1, wherein step S4 comprises:
dividing the pixel coordinate set of each lane linear marker target into a plurality of subsets according to whether the abscissa or the ordinate is the same;
for the subsets grouped by identical abscissa, averaging the ordinates within each subset, so that each abscissa x1 corresponds to one ordinate average y1'; connecting the points (x1, y1') forms the central axis of the lane linear marker, and the two end-point coordinates of the central axis are output as the vectorization result of the transverse linear marker;
for the subsets grouped by identical ordinate, averaging the abscissas within each subset, so that each ordinate y2 corresponds to one abscissa average x2'; connecting the points (x2', y2) forms the central axis of the lane linear marker, and the two end-point coordinates of the central axis are output as the vectorization result of the longitudinal linear marker.
6. The method for detecting lane linear markers based on semantic segmentation as claimed in claim 5, wherein step S4 further comprises:
taking the two end points of the central axis as the midpoints of two opposite sides of a rectangular frame, thereby fitting the central axis into a rectangular frame wrapping the lane linear marker, and outputting the rectangular frame as the vectorization result.
7. A lane linear marker detection system based on semantic segmentation, comprising:
the semantic segmentation module is used for training a semantic segmentation model, inputting an original image, outputting a mask image and extracting a binary image of the original image according to a gray value label of the lane linear marker; the lane linear marker comprises a transverse linear marker and a longitudinal linear marker;
the contour extraction module is used for extracting the contour of each lane linear marker in the binary image and storing contour points;
the statistical module is used for filling different gray values into different lane linear markers according to the contour points and counting the pixel coordinate set of each target;
and the central axis extraction module is used for extracting the central axis of the lane linear marker according to the pixel coordinate set of each lane linear marker target, and taking the coordinates of the front point and the rear point of the central axis as vectorization results to output.
8. The lane linear marker detection system based on semantic segmentation according to claim 7, wherein the central axis extraction module comprises a merging and connecting module, configured to connect and merge two or more central axes that lie on the same straight line and whose length after connection does not exceed a preset threshold, and to output the coordinates of the front and rear points of the merged central axis as the vectorization result.
9. An electronic device, comprising:
a memory for storing a computer software program;
a processor for reading and executing the computer software program stored in the memory, so as to implement the method for detecting lane linear markers based on semantic segmentation according to any one of claims 1 to 6.
10. A non-transitory computer-readable storage medium, wherein the storage medium stores a computer software program for implementing the method for detecting lane linear markers based on semantic segmentation according to any one of claims 1 to 6.
CN202110868082.1A 2021-07-30 2021-07-30 Lane linear marker detection method and system based on semantic segmentation Pending CN113780067A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110868082.1A CN113780067A (en) 2021-07-30 2021-07-30 Lane linear marker detection method and system based on semantic segmentation

Publications (1)

Publication Number Publication Date
CN113780067A 2021-12-10

Family

ID=78836539

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110868082.1A Pending CN113780067A (en) 2021-07-30 2021-07-30 Lane linear marker detection method and system based on semantic segmentation

Country Status (1)

Country Link
CN (1) CN113780067A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2007361004A1 (en) * 2007-11-16 2009-05-22 Tele Atlas B.V. Method of and apparatus for producing lane information
WO2020048487A1 (en) * 2018-09-05 2020-03-12 北京嘀嘀无限科技发展有限公司 Image data processing method and system
CN111145248A (en) * 2018-11-06 2020-05-12 北京地平线机器人技术研发有限公司 Pose information determination method and device and electronic equipment
CN110458083A (en) * 2019-08-05 2019-11-15 武汉中海庭数据技术有限公司 A kind of lane line vectorization method, device and storage medium
CN112434585A (en) * 2020-11-14 2021-03-02 武汉中海庭数据技术有限公司 Method, system, electronic device and storage medium for identifying virtual reality of lane line
CN112528917A (en) * 2020-12-18 2021-03-19 深兰科技(上海)有限公司 Zebra crossing region identification method and device, electronic equipment and storage medium
CN112862839A (en) * 2021-02-24 2021-05-28 清华大学 Method and system for enhancing robustness of semantic segmentation of map elements
CN113112480A (en) * 2021-04-16 2021-07-13 北京文安智能技术股份有限公司 Video scene change detection method, storage medium and electronic device

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Sandipann P. Narote et al., "A review of recent advances in lane detection and departure warning system", Pattern Recognition, vol. 73, pages 216-234 *
叶阳阳, "Research on Key Technologies of Environment Perception for Road Traffic", China Doctoral Dissertations Full-text Database (Engineering Science and Technology II), no. 03, pages 035-10 *
张永宏 et al., "A Survey of Road Extraction Methods for Remote Sensing Images", Computer Engineering and Applications, vol. 54, no. 13, pages 1-10 *
张浩, "Lane Line Recognition Based on Lane Line Width Features", 南方农机, vol. 51, no. 09, pages 46-49 *
杨帆, "Digital Image Processing and Analysis, 4th Edition", Beijing: Beihang University Press, 31 January 2019, pages 225-226 *
钟棉卿, "Research on Road Surface Condition Detection Methods Based on Mobile LiDAR Data", China Doctoral Dissertations Full-text Database (Engineering Science and Technology II), no. 06, pages 034-11 *

Similar Documents

Publication Publication Date Title
CN111401371B (en) Text detection and identification method and system and computer equipment
US10133941B2 (en) Method, apparatus and device for detecting lane boundary
CN112528878A (en) Method and device for detecting lane line, terminal device and readable storage medium
CN104217427B (en) Lane line localization method in a kind of Traffic Surveillance Video
CN103093181B (en) A kind of method and apparatus of license plate image location
CN110472580B (en) Method, device and storage medium for detecting parking stall based on panoramic image
Romdhane et al. An improved traffic signs recognition and tracking method for driver assistance system
CN112200884B (en) Lane line generation method and device
Li et al. Inverse perspective mapping based urban road markings detection
CN111191611A (en) Deep learning-based traffic sign label identification method
CN114120289B (en) Method and system for identifying driving area and lane line
Das et al. Estimation of road boundary for intelligent vehicles based on deeplabv3+ architecture
CN113780070A (en) Pedestrian crossing early warning identification detection method and system
CN104966072A (en) Shape-based color-mark-free robotic fish pose identification algorithm
CN111914596A (en) Lane line detection method, device, system and storage medium
CN113780067A (en) Lane linear marker detection method and system based on semantic segmentation
CN113033363A (en) Vehicle dense target detection method based on deep learning
CN111563416A (en) Automatic steering method and system based on rice transplanter
CN116434160A (en) Expressway casting object detection method and device based on background model and tracking
CN112669346B (en) Pavement emergency determination method and device
CN114627319A (en) Target data reporting method and device, storage medium and electronic device
CN112215205B (en) Target identification method and device, computer equipment and storage medium
CN115018926A (en) Method, device and equipment for determining pitch angle of vehicle-mounted camera and storage medium
Sun et al. Multi-lane detection using CNNs and a novel region-grow algorithm
CN111260723A (en) Barycenter positioning method of bar and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination