CN110458023B - Training method of road line detection model, and road line detection method and device - Google Patents

Training method of road line detection model, and road line detection method and device

Info

Publication number
CN110458023B
CN110458023B
Authority
CN
China
Prior art keywords
road
line detection
road line
feature map
predicted
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910624027.0A
Other languages
Chinese (zh)
Other versions
CN110458023A (en)
Inventor
Duan Xiangyu (段翔宇)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Geely Holding Group Co Ltd
Zhejiang Geely Automobile Research Institute Co Ltd
Original Assignee
Zhejiang Geely Holding Group Co Ltd
Zhejiang Geely Automobile Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Geely Holding Group Co Ltd, Zhejiang Geely Automobile Research Institute Co Ltd filed Critical Zhejiang Geely Holding Group Co Ltd
Priority to CN201910624027.0A priority Critical patent/CN110458023B/en
Publication of CN110458023A publication Critical patent/CN110458023A/en
Application granted
Publication of CN110458023B publication Critical patent/CN110458023B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a training method of a road line detection model, a road line detection method and a road line detection device, and relates to the fields of image processing and intelligent driving. In the disclosed training method, a continuous linear operator is adopted to extract the spatial position continuity of the feature map and the linear form of the road line, and supervised training is performed to obtain a deep road line detection model. A multi-frame fusion technique generates predicted road information according to the vehicle motion track and historical road information, and the occluded part of the current road line is corrected according to the predicted road line, so that detection robustness is enhanced and the road line detection model is better suited to application under actual working conditions.

Description

Training method of road line detection model, and road line detection method and device
Technical Field
The invention relates to the field of image processing and intelligent driving, in particular to a road line detection model training method, a road line detection method and a road line detection device.
Background
In recent years, intelligent systems have been widely used in the field of driving, with the purpose of realizing automatic driving or driving assistance functions. In the visual perception system for intelligent driving, lane detection is an important functional module: lane lines usually need to be detected from road images around the vehicle in order to guide the driving of the vehicle.
Currently, in both academia and industry, many deep learning methods have been developed for detecting lane lines and road boundary lines; typical examples are LaneNet and SCNN. LaneNet was introduced in the article "Towards End-to-End Lane Detection: an Instance Segmentation Approach" published at the 2018 IEEE Intelligent Vehicles Symposium (IV). The flow of the method is shown in fig. 1: LaneNet trains two branches, and after semantic segmentation of the image pixels into lane-line points and non-lane-line points, the segmentation results are clustered to obtain the pixel sets belonging to individual lane lines. SCNN was introduced in the article "Spatial As Deep: Spatial CNN for Traffic Scene Understanding" published at the 2018 AAAI Conference on Artificial Intelligence. The flow of the method is shown in fig. 2: SCNN first extracts image features through a convolutional neural network, and then models the transverse and longitudinal continuity of the image using Spatial CNN.
Both methods achieve good detection results, but each has shortcomings under actual working conditions. For example, the LaneNet method tends to produce intermittent detection results, while the SCNN method tends to misdetect continuous stains on the road surface and the like as lane lines or road boundary lines. Neither method handles the case in which the lane line or road boundary line is occluded, yet occlusion of lane lines and road boundary lines is quite common in practical application scenarios and interferes with the detection results of both methods.
Disclosure of Invention
The invention aims to provide a training method of a road line detection model, a road line detection method and a road line detection device, so as to solve the problems in the prior art that lane lines and road boundaries cannot be detected accurately and the detection effect is poor.
In view of the above, the present invention provides a method for training a road line detection model, which includes:
acquiring a road sample image,
in particular, the road sample image at least comprises annotated road line data.
And extracting the features of the road sample image through a convolutional network to obtain a feature map.
Extracting deep features of the feature map by adopting a continuous linear operator,
in particular, the deep features include the spatial position continuity of the road line and the linear form of the road line.
And performing supervised training with the annotated road line data according to the deep features, optimizing a deep learning network loss function, and adjusting network parameters to obtain a road line detection model.
Further, the step of extracting the deep features of the feature map by using a continuous linear operator comprises:
iteratively convolving the feature map in the transverse and longitudinal directions using a convolution kernel,
specifically, the transverse and longitudinal directions include from right to left, from left to right, from top to bottom, and from bottom to top.
And acquiring the point probability values of the feature map in a direction.
Judging, according to the point probability values, whether the feature map has a linear form in the direction,
if so, extracting the deep features of the feature map;
if not, suppressing the direction in which no linear form exists.
Further, the step of judging whether the feature map has a linear form in the direction includes:
judging whether the sum of the point probability values in the direction is greater than a first threshold;
if so, judging whether the difference of the point probability values in the direction is smaller than a second threshold;
and if the difference is smaller than the second threshold, judging that the feature map has a linear form in the direction.
Alternatively, the step of judging whether the feature map has a linear form in the direction includes:
judging whether the difference of the point probability values in the direction is smaller than a second threshold;
if so, judging whether the sum of the point probability values in the direction is greater than a first threshold;
and if the sum is greater than the first threshold, judging that the feature map has a linear form in the direction.
Correspondingly, the invention also provides a road line detection method, which comprises the following steps:
acquiring a current road image and inputting it into a road line detection model to generate current road information, wherein the road line detection model is obtained by supervised training according to any one of the above training methods for a road line detection model;
generating predicted road information according to the vehicle motion track and the historical road information;
judging whether the current road information is associated with the predicted road information based on the unobstructed road line and the predicted road line;
if so, correcting the current road information according to the predicted road information,
specifically, the predicted road information includes at least a predicted road route;
specifically, the current road information at least comprises an unobstructed road line;
specifically, an extended Kalman filter algorithm is used to correct the current road information according to the predicted road information.
Further, before determining whether the current road information is associated with the predicted road information, the road line detection method further includes: calculating a distance between the unobstructed road line and the predicted road line.
Further, the step of determining whether the current road information is associated with the predicted road information includes:
judging whether the distance is smaller than a preset threshold value or not;
if so, determining that the current road information is associated with the predicted road information,
specifically, an expectation-maximization algorithm is employed to reduce computational errors.
Correspondingly, the invention also provides a training device of the road line detection model, which comprises:
and the road sample image acquisition unit is used for acquiring a road sample image.
And the extraction unit is used for extracting the features of the road sample image and the deep features of the feature map.
Specifically, the extraction unit includes a first extraction unit and a second extraction unit:
a first extraction unit, configured to extract features of the road sample image;
and the second extraction unit is used for extracting the deep features of the continuity and linear morphology of the feature map.
And the training unit is used for carrying out supervision training by using the marked road line data, optimizing a deep learning network loss function, and adjusting network parameters to obtain a road line detection model.
Further, the training device for the road route detection model further comprises:
the first judging unit is used for judging whether the feature map has continuity in one direction;
and the second judging unit is used for judging whether the feature map has a linear form in a local area.
The embodiment of the invention has the following beneficial effects:
the invention discloses a training method of a road line detection model, a road line detection method and a device, which adopt continuous linear operators to extract the spatial position continuity of a characteristic diagram and the linear form of a road line, supervise and train to obtain a deep road route detection model, generate predicted road information according to a vehicle motion track and historical road information by adopting a multi-frame fusion technology, correct the shielded part in the current road route according to the predicted road line, strengthen the detection robustness and enable the road line detection model to be more suitable for being applied in actual working conditions.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings used in the description of the embodiments or of the prior art are briefly described below. It is obvious that the drawings in the following description illustrate only some embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a schematic flow chart of a LaneNet deep learning method;
FIG. 2 is a flow chart diagram of an SCNN deep learning method;
fig. 3 is a schematic flowchart of a training method of a road line detection model according to an embodiment of the present invention;
fig. 4 is a schematic flowchart of a road route detection method according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a training device for a road route detection model according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an iterative operation according to an embodiment of the present invention;
FIG. 7 is a partial area diagram of a feature map;
FIG. 8 is a schematic view of a partially linear pattern in one direction.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the invention, not all of them. All other embodiments obtained by a person skilled in the art without inventive effort on the basis of the embodiments of the present invention shall fall within the scope of protection of the present invention.
It should be noted that reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic may be included in at least one implementation of the invention. In the description of the present invention, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated; thus, features defined as "first" and "second" may explicitly or implicitly include one or more of such features. Also, the terms "first" and "second" are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that data so used may be interchanged under appropriate circumstances, such that the embodiments of the invention described herein may be practiced in sequences other than those illustrated or described herein. The terms "upper", "lower", "left", "right", and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, are used only for convenience in describing the present invention or simplifying the description, and do not indicate or imply specific orientations; they are therefore not to be construed as limiting the present invention. Furthermore, the terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to the method or apparatus herein.
Examples
Referring to fig. 3, which is a schematic flow chart of a training method of a road line detection model according to an embodiment of the present invention, the present specification provides the method steps as shown in the flow chart, but more or fewer steps may be included based on conventional or non-creative effort. The order of steps recited in the embodiments is only one of many possible execution orders and does not represent the only possible order; in the actual training process, the steps can be executed in the order shown in the figures or in another implementable order. Specifically, as shown in fig. 3, the method for training the road line detection model includes:
s110, acquiring a road sample image;
it should be noted that, in the embodiment of the present specification, the road sample image at least includes labeled road route data.
S120, extracting the features of the road sample image through a convolutional network to obtain a feature map.
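As a minimal illustration of this feature-extraction step, the sketch below stacks a few convolutional layers to map a road sample image to a feature map. It assumes a PyTorch setting, and the layer sizes and input resolution are illustrative assumptions; the patent does not fix a particular backbone architecture.

```python
import torch
import torch.nn as nn

# A small convolutional backbone (an assumption; the patent does not specify the architecture)
backbone = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
)

road_sample_image = torch.randn(1, 3, 288, 800)   # hypothetical image size
feature_map = backbone(road_sample_image)         # downsampled by 8 in each dimension
print(feature_map.shape)                          # torch.Size([1, 64, 36, 100])
```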
S130, extracting deep features of the feature map by adopting a continuous linear operator;
it should be noted that, in the embodiment of the present specification, the deep features include spatial position continuity of the road route and linear shape of the road route.
In an embodiment of the present specification, the step of extracting deep features of the feature map by using a continuous line operator includes:
performing iterative convolution on the transverse direction and the longitudinal direction of the feature map by using a convolution kernel;
acquiring the point probability values of the feature map in a direction;
judging, according to the point probability values, whether the feature map has a linear form in the direction;
if so, extracting the deep features of the feature map;
if not, suppressing the direction in which no linear form exists.
Specifically, the transverse and longitudinal directions comprise four directions, namely from right to left, from left to right, from top to bottom, and from bottom to top. The convolution kernel may take the form of a recurrent neural network or other forms; the embodiments of the present specification are not particularly limited in this respect. Taking the left-to-right direction as an example, each column of the feature map is convolved with a convolution kernel of the same structure, the result is stored, and the same set of weights is then applied to the next column. Specifically, as shown in fig. 6, x represents the input, o represents the output, U, V and W are weights, and s is the hidden layer state. In the embodiment of the present specification, the convolution kernels in the four directions have the same structure, and the weights within each direction are shared.
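To make the column-by-column recurrence concrete, the following is a minimal sketch of a single left-to-right pass, assuming a PyTorch setting; the module name, tensor sizes and the ReLU non-linearity are illustrative assumptions rather than the patent's exact construction. Each column is convolved with a kernel whose weights are shared across all columns, and the result is added to the next column, propagating spatial continuity along the direction.

```python
import torch
import torch.nn as nn

class DirectionalPass(nn.Module):
    """Left-to-right message passing over a feature map (sketch).

    Each column is convolved with a kernel shared by all columns (the
    recurrent weight of fig. 6) and added to the next column, so that
    evidence for a continuous line propagates along the direction.
    """

    def __init__(self, channels: int, kernel_size: int = 9):
        super().__init__()
        # 1-D convolution along the height of a single column; the same
        # weights are reused for every column of the feature map.
        self.col_conv = nn.Conv2d(
            channels, channels,
            kernel_size=(kernel_size, 1),
            padding=(kernel_size // 2, 0),
            bias=False,
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (N, C, H, W) feature map from the backbone CNN
        columns = list(torch.split(feat, 1, dim=3))  # W slices of shape (N, C, H, 1)
        for i in range(1, len(columns)):
            # hidden state s_i = x_i + f(conv(s_{i-1})), as in a recurrent cell
            columns[i] = columns[i] + torch.relu(self.col_conv(columns[i - 1]))
        return torch.cat(columns, dim=3)

if __name__ == "__main__":
    feat = torch.randn(1, 64, 36, 100)            # hypothetical backbone output
    out = DirectionalPass(channels=64)(feat)      # one of the four directional passes
    print(out.shape)                              # torch.Size([1, 64, 36, 100])
```

The other three directions can be obtained by flipping or transposing the feature map before and after the same pass, which keeps the kernel structure identical across directions as described above.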
Specifically, the step of determining whether the feature map has a linear form in the direction includes:
judging whether the sum of the point probability values in the direction is greater than a first threshold;
if so, judging whether the difference of the point probability values in the direction is smaller than a second threshold;
and if the difference is smaller than the second threshold, determining that the feature map has a linear form in the direction.
Alternatively, the step of determining whether the feature map has a linear form in the direction includes:
judging whether the difference of the point probability values in the direction is smaller than a second threshold;
if so, judging whether the sum of the point probability values in the direction is greater than a first threshold;
and if the sum is greater than the first threshold, determining that the feature map has a linear form in the direction.
In the embodiment of the present specification, it is determined whether the sum of the point probability values is greater than a first threshold and whether the difference between the point probability values is smaller than a second threshold, where the first threshold and the second threshold are fixed values that are kept consistent across the four directions. For example, fig. 7 shows a local area of a feature map in which the gray level indicates the probability that a road line exists, and in fig. 8 the directions a, b, c, d surround the central point. Taking direction a as an example, the sum and the difference of the probabilities of points 4, 5, 6 and points 12, 13, 14 are calculated; if the sum is greater than the first threshold and the difference is smaller than the second threshold, direction a is considered to have a linear form. Whether directions b, c and d have a linear form is calculated in the same way. If several directions have a linear form, the direction with the largest probability sum is selected as the main direction, and the other directions are suppressed.
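The sum-and-difference test and the suppression of secondary directions can be sketched as follows; the function names, the specific points taken on either side of the centre, and the threshold values are assumptions for illustration only.

```python
import numpy as np

def has_linear_form(probs_a: np.ndarray, probs_b: np.ndarray,
                    first_threshold: float, second_threshold: float) -> bool:
    """Judge whether one direction around a centre point has a linear form.

    probs_a / probs_b: point probability values on the two sides of the
    centre point along the candidate direction (e.g. points 4,5,6 and
    12,13,14 of fig. 8). A direction is kept when the probabilities are
    strong (sum above the first threshold) and balanced (difference below
    the second threshold); otherwise it is suppressed.
    """
    total = float(probs_a.sum() + probs_b.sum())
    diff = abs(float(probs_a.sum() - probs_b.sum()))
    return total > first_threshold and diff < second_threshold

def main_direction(direction_probs: dict, first_threshold: float,
                   second_threshold: float):
    """Among directions a, b, c, d keep the one with the largest probability
    sum that passes the linear-form test; the other directions are suppressed."""
    candidates = {
        name: pa.sum() + pb.sum()
        for name, (pa, pb) in direction_probs.items()
        if has_linear_form(pa, pb, first_threshold, second_threshold)
    }
    return max(candidates, key=candidates.get) if candidates else None

# hypothetical local-patch probabilities for two of the four directions of fig. 8
directions = {
    "a": (np.array([0.8, 0.9, 0.85]), np.array([0.82, 0.88, 0.9])),
    "b": (np.array([0.1, 0.2, 0.1]), np.array([0.15, 0.1, 0.2])),
}
print(main_direction(directions, first_threshold=4.0, second_threshold=0.5))  # -> "a"
```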
S140, according to the deep features, carrying out supervision training by using the marked road line data, optimizing a deep learning network loss function, and adjusting network parameters to obtain a road line detection model.
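The supervised training step S140 can be sketched as a standard training loop; the model, data loader, per-pixel loss and optimizer below are assumptions, since the patent does not commit to a particular loss function or optimizer.

```python
import torch
import torch.nn as nn

def train_road_line_model(model, data_loader, epochs: int = 10, lr: float = 1e-3):
    """Supervised training loop (sketch): optimise a deep-learning loss
    against the annotated road line data and adjust the network parameters."""
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.BCEWithLogitsLoss()   # assumed per-pixel road-line / background loss
    model.train()
    for _ in range(epochs):
        for image, annotated_mask in data_loader:
            # annotated_mask: float tensor of the same shape as the model output,
            # derived from the annotated road line data
            logits = model(image)        # backbone + continuous linear operator
            loss = criterion(logits, annotated_mask)
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()             # adjust network parameters
    return model
```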
Referring to fig. 4, which is a schematic flow chart of a road line detection method according to an embodiment of the present invention, the present specification provides the method steps as shown in the flow chart, but more or fewer steps may be included based on conventional or non-inventive effort. The order of steps recited in the embodiments is only one of many possible execution orders and does not represent the only possible order; in actual detection, the steps can be executed in the order shown in the figures or in another implementable order. Specifically, as shown in fig. 4, the road line detection method includes:
s210, acquiring a current road image, and inputting a road route detection model to generate current road information;
in an embodiment of the present specification, the road line detection model is a road line detection model obtained by supervised training of the training method of the road line detection model according to any one of the above items.
In an embodiment of the present specification, the current road information includes at least an unobstructed road route.
S220, generating predicted road information according to the vehicle motion track and the historical road information;
in an embodiment of the present specification, the predicted road information includes at least a predicted road route.
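One simple way to realize this prediction step is to transform road-line points from a previous frame into the current vehicle frame using the vehicle motion between the two frames. The sketch below assumes a planar rigid motion (dx, dy, dyaw) from odometry and a point-set representation of the road line; both are illustrative assumptions, not the patent's exact multi-frame fusion.

```python
import numpy as np

def predict_road_line(history_points: np.ndarray, dx: float, dy: float,
                      dyaw: float) -> np.ndarray:
    """Generate a predicted road line for the current frame (sketch).

    history_points: (N, 2) road-line points detected in a previous frame,
    in that frame's vehicle coordinates. dx, dy, dyaw: vehicle motion
    between the frames (translation in metres, yaw in radians).
    The historical points are moved into the current vehicle frame.
    """
    c, s = np.cos(dyaw), np.sin(dyaw)
    rot = np.array([[c, -s], [s, c]])              # rotation of the vehicle between frames
    shifted = history_points - np.array([dx, dy])  # undo the translation
    return shifted @ rot                           # row-vector form of R^T @ p (undo the rotation)

predicted = predict_road_line(
    np.array([[5.0, 1.8], [10.0, 1.9], [15.0, 2.0]]),  # hypothetical historical lane points
    dx=1.2, dy=0.0, dyaw=0.02,
)
```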
S230 determining whether the current road information is associated with the predicted road information based on an unobstructed road route and a predicted road route;
in an embodiment of the present specification, before determining whether the current road information is associated with the predicted road information, the road line detection method further includes: calculating a distance between the unobstructed road line and the predicted road line.
In an embodiment of the present specification, the step of determining whether the current road information is associated with the predicted road information includes:
judging whether the distance is smaller than a preset threshold value or not;
and if so, judging that the current road information is associated with the predicted road information.
And if not, judging that the current road information is not associated with the predicted road information.
Specifically, an expectation-maximization algorithm is employed to reduce computational errors.
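The association test of step S230 can be sketched as a distance computation followed by a threshold comparison; the mean nearest-point distance, the point-set representation and the preset threshold value below are assumptions for illustration.

```python
import numpy as np

def road_line_distance(current_line: np.ndarray, predicted_line: np.ndarray) -> float:
    """Mean nearest-point distance between the unobstructed road line detected
    in the current frame and the predicted road line (both (N, 2) point sets)."""
    diffs = current_line[:, None, :] - predicted_line[None, :, :]
    nearest = np.linalg.norm(diffs, axis=2).min(axis=1)
    return float(nearest.mean())

def is_associated(current_line: np.ndarray, predicted_line: np.ndarray,
                  preset_threshold: float = 0.5) -> bool:
    """The two road lines are considered associated when their distance is
    smaller than the preset threshold."""
    return road_line_distance(current_line, predicted_line) < preset_threshold

predicted = np.array([[5.0, 1.90], [10.0, 1.95], [15.0, 2.00]])  # hypothetical prediction
current = np.array([[5.1, 1.85], [10.2, 1.92], [14.9, 1.98]])    # hypothetical detection
print(is_associated(current, predicted, preset_threshold=0.5))   # -> True
```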
S240, if the predicted road information is relevant, correcting the current road information according to the predicted road information;
in the embodiment of the present specification, an extended Kalman filter algorithm is used to correct the current road information based on the predicted road information.
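As an illustration of the correction in S240, the following is a deliberately simplified, one-dimensional Kalman-style update on a lateral offset; the patent's extended Kalman filter would operate on a richer, generally non-linear road-line state, so this sketch only conveys the idea of blending the predicted and the detected road lines.

```python
def kalman_correct(predicted_offset: float, predicted_var: float,
                   measured_offset: float, measurement_var: float):
    """One-dimensional Kalman-style update (simplified stand-in for the EKF).

    predicted_offset: lateral offset of the predicted road line (from the
    vehicle motion track and historical road information).
    measured_offset: lateral offset of the associated, unobstructed road
    line detected in the current frame.
    Returns the corrected offset and its variance; occluded segments of the
    current road line can then be filled in from the corrected estimate.
    """
    gain = predicted_var / (predicted_var + measurement_var)
    corrected_offset = predicted_offset + gain * (measured_offset - predicted_offset)
    corrected_var = (1.0 - gain) * predicted_var
    return corrected_offset, corrected_var

# hypothetical values: the fresh detection is trusted more than the prediction
print(kalman_correct(predicted_offset=1.95, predicted_var=0.04,
                     measured_offset=1.90, measurement_var=0.01))  # -> (1.91, 0.008)
```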
Referring to fig. 5, which is a schematic structural diagram of a training device for a road line detection model according to an embodiment of the present invention, the present specification provides the components shown in the schematic structural diagram, but more or fewer components may be included based on conventional or non-creative effort. The components recited in the embodiment are only one possible composition and do not represent the only possible composition; an actual device may use the components shown in the drawings or other components. Specifically, as shown in fig. 5, the training device for the road line detection model includes:
a road sample image obtaining unit 310, configured to obtain a road sample image.
An extracting unit 320, configured to extract features of the road sample image and deep features of the feature map.
In the embodiments of the present specification, the extraction unit includes a first extraction unit and a second extraction unit;
a first extraction unit, configured to extract features of the road sample image;
and the second extraction unit is used for extracting the deep features of the continuity and linear morphology of the feature map.
And the training unit 330 is configured to perform supervised training with the labeled road line data, optimize a deep learning network loss function, and adjust network parameters to obtain a road line detection model.
The training device of the road line detection model further comprises:
the first judging unit is used for judging whether the characteristic diagram has continuity in one direction;
and the second judging unit is used for judging whether the characteristic diagram has a linear form in a local area.
By adopting the training method of the road line detection model, the road line detection method and the device provided by the embodiments of the invention, the spatial position continuity of the feature map and the linear form of the road line are extracted with a continuous linear operator, and a deep road line detection model is obtained through supervised training. Predicted road information is generated by the multi-frame fusion technique according to the vehicle motion track and historical road information, and the occluded part of the current road line is corrected according to the predicted road line, so that detection robustness is enhanced and the road line detection model is better suited to application under actual working conditions.
It should be noted that: the foregoing descriptions of the embodiments of the present invention are provided for illustration only, and not for the purpose of limiting the invention as defined by the appended claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
All the embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment is described with emphasis on differences from other embodiments. In particular, as for the embodiment of the apparatus, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
Those skilled in the art will appreciate that all or part of the steps for implementing the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware, and the program may be stored in a computer-readable medium.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.

Claims (8)

1. A method for training a road route detection model is characterized by comprising the following steps:
acquiring a road sample image, wherein the road sample image at least comprises marked road route data;
extracting the features of the road sample image through a convolutional network to obtain a feature map;
extracting deep features of the feature map by adopting a continuous linear operator, wherein the deep features comprise the spatial position continuity of a road line and the linear form of the road line;
according to the deep features, carrying out supervision training by using the marked road line data, optimizing a deep learning network loss function, and adjusting network parameters to obtain a road line detection model;
the step of extracting the deep features of the feature map by adopting a continuous linear operator comprises the following steps:
performing iterative convolution on the transverse direction and the longitudinal direction of the feature map by using a convolution kernel;
acquiring the point probability values of the feature map in a direction, the point probability value being the probability, indicated by the gray level of the point, that a road line exists;
judging, according to the point probability values, whether the feature map has a linear form in the direction;
if so, extracting the deep features of the feature map;
if not, suppressing the direction in which no linear form exists.
2. The method as claimed in claim 1, wherein the step of determining whether the feature map has a linear shape in the direction comprises:
judging whether the sum of the point probability values in the direction is greater than a first threshold value;
if so, judging whether the difference of the point probability values in the direction is smaller than a second threshold value;
and if the difference is smaller than the second threshold, judging that the feature map has a linear form in the direction.
3. A road line detection method, characterized in that the road line detection method comprises:
acquiring a current road image, and inputting a road line detection model to generate current road information, wherein the current road information at least comprises an unobstructed road line, and the road line detection model is obtained by supervised training of the training method of the road line detection model according to any one of claims 1-2;
generating predicted road information according to the vehicle motion track and historical road information, wherein the predicted road information at least comprises a predicted road route;
judging whether the current road information is associated with the predicted road information based on the unobstructed road line and the predicted road line;
and if the predicted road information is correlated with the current road information, correcting the current road information according to the predicted road information.
4. The road line detection method according to claim 3, wherein before determining whether the current road information is associated with the predicted road information, the road line detection method further comprises:
calculating a distance between the unobstructed road line and the predicted road line.
5. The road route detection method according to claim 4, wherein the step of determining whether the current road information is associated with the predicted road information comprises:
judging whether the distance is smaller than a preset threshold value or not;
and if so, judging that the current road information is associated with the predicted road information.
6. A training device for road route detection models is characterized by comprising:
the road sample image acquisition unit is used for acquiring a road sample image;
the extraction unit is used for extracting the features of the road sample image and the deep features of the feature map, and comprises the following steps:
the step of extracting the deep features of the feature map by adopting a continuous linear operator comprises the following steps:
performing iterative convolution on the transverse direction and the longitudinal direction of the feature map by using a convolution kernel;
acquiring the point probability values of the feature map in a direction, the point probability value being the probability, indicated by the gray level of the point, that a road line exists;
judging, according to the point probability values, whether the feature map has a linear form in the direction;
if so, extracting the deep features of the feature map;
if not, suppressing the direction in which no linear form exists;
and the training unit is used for carrying out supervision training by using the marked road line data, optimizing a deep learning network loss function, and adjusting network parameters to obtain a road line detection model.
7. The training device of claim 6, wherein the extraction unit comprises a first extraction unit and a second extraction unit;
a first extraction unit, configured to extract features of the road sample image;
and the second extraction unit is used for extracting deep features of the feature map.
8. The training device of road route detection model according to claim 6, further comprising:
the first judging unit is used for judging whether the feature map has continuity in one direction;
and the second judging unit is used for judging whether the feature map has a linear form in a local area.
CN201910624027.0A 2019-07-11 2019-07-11 Training method of road line detection model, and road line detection method and device Active CN110458023B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910624027.0A CN110458023B (en) 2019-07-11 2019-07-11 Training method of road line detection model, and road line detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910624027.0A CN110458023B (en) 2019-07-11 2019-07-11 Training method of road line detection model, and road line detection method and device

Publications (2)

Publication Number Publication Date
CN110458023A CN110458023A (en) 2019-11-15
CN110458023B true CN110458023B (en) 2022-08-02

Family

ID=68482547

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910624027.0A Active CN110458023B (en) 2019-07-11 2019-07-11 Training method of road line detection model, and road line detection method and device

Country Status (1)

Country Link
CN (1) CN110458023B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111291676B (en) * 2020-02-05 2020-12-11 清华大学 Lane line detection method and device based on laser radar point cloud and camera image fusion and chip
CN111996883B (en) * 2020-08-28 2021-10-29 四川长虹电器股份有限公司 Method for detecting width of road surface

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105260699A (en) * 2015-09-10 2016-01-20 百度在线网络技术(北京)有限公司 Lane line data processing method and lane line data processing device
CN108090456A (en) * 2017-12-27 2018-05-29 北京初速度科技有限公司 A kind of Lane detection method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10628671B2 (en) * 2017-11-01 2020-04-21 Here Global B.V. Road modeling from overhead imagery

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105260699A (en) * 2015-09-10 2016-01-20 百度在线网络技术(北京)有限公司 Lane line data processing method and lane line data processing device
CN108090456A (en) * 2017-12-27 2018-05-29 北京初速度科技有限公司 A kind of Lane detection method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LaneNet: Real-Time Lane Detection Networks for Autonomous Driving; Ze Wang et al.; arXiv:1807.01726v1; 2018-07-04; full text *
Research and implementation of lane line detection and recognition technology based on morphological features; Wang Dandan; China Master's Theses Full-text Database, Information Science and Technology; 2014-07-15; Vol. 2014, No. 7; page 36, paragraph 1 and page 39, paragraph 3 *

Also Published As

Publication number Publication date
CN110458023A (en) 2019-11-15

Similar Documents

Publication Publication Date Title
CN106327520B (en) Moving target detection method and system
KR101609303B1 (en) Method to calibrate camera and apparatus therefor
CN103093198B (en) A kind of crowd density monitoring method and device
CN110390292B (en) Remote sensing video vehicle target detection and tracking method based on dynamic correlation model
US20180365843A1 (en) Method and system for tracking moving objects based on optical flow method
CN107833236A (en) Semantic vision positioning system and method are combined under a kind of dynamic environment
US10748013B2 (en) Method and apparatus for detecting road lane
CN108805016B (en) Head and shoulder area detection method and device
CN112883820B (en) Road target 3D detection method and system based on laser radar point cloud
CN110458023B (en) Training method of road line detection model, and road line detection method and device
CN104463165A (en) Target detection method integrating Canny operator with Vibe algorithm
CN113205138B (en) Face and human body matching method, equipment and storage medium
CN107909047A (en) A kind of automobile and its lane detection method and system of application
KR20180070258A (en) Method for detecting and learning of objects simultaneous during vehicle driving
CN110088807A (en) Separator bar identification device
CN110147748A (en) A kind of mobile robot obstacle recognition method based on road-edge detection
CN113838087B (en) Anti-occlusion target tracking method and system
CN112115878A (en) Forest fire smoke root node detection method based on smoke area density
CN108062515B (en) Obstacle detection method and system based on binocular vision and storage medium
CN114445398A (en) Method and device for monitoring state of side protection plate of hydraulic support of coal mining machine
CN107563282A (en) For unpiloted recognition methods, electronic equipment, storage medium and system
CN114299399B (en) Aircraft target confirmation method based on skeleton line relation
Yang et al. A novel vision-based framework for real-time lane detection and tracking
CN111814582B (en) Method and device for processing driver behavior monitoring image
CN117557790A (en) Training method of image mask generator and image instance segmentation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant