CN112434591A - Lane line determination method and device

Lane line determination method and device

Info

Publication number
CN112434591A
Authority
CN
China
Prior art keywords
lane line
target image
reference point
image
lane
Prior art date
Legal status
Granted
Application number
CN202011302820.8A
Other languages
Chinese (zh)
Other versions
CN112434591B (en)
Inventor
陈克凡
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202011302820.8A priority Critical patent/CN112434591B/en
Publication of CN112434591A publication Critical patent/CN112434591A/en
Application granted granted Critical
Publication of CN112434591B publication Critical patent/CN112434591B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Abstract

The embodiments of the present application provide a lane line determination method and device. The method includes the following steps: acquiring a target image containing a lane line; performing feature extraction on the target image through a deep learning model to obtain at least one lane line feature vector; determining the positions of at least two lane line reference points related to each lane line on the target image according to each lane line feature vector; and fitting the position of the lane line on the target image according to the positions of the at least two lane line reference points on the target image. The technical solution of the embodiments of the present application can reduce the amount of computation in the lane line recognition process.

Description

Lane line determination method and device
Technical Field
The application relates to the technical field of computers and intelligent driving, in particular to a lane line determining method and device.
Background
In traffic scenes that require lane line recognition, such as lane line recognition in intelligent driving, lane lines are usually recognized by determining, for each pixel in the input picture, whether it is a lane line pixel, and then fitting the identified pixel points to obtain the lane line. However, this scheme requires both an encoding step and a decoding step and is computationally expensive. Based on this, how to reduce the amount of computation in the lane line recognition process is an urgent technical problem to be solved.
Disclosure of Invention
Embodiments of the present application provide a lane line determination method, apparatus, computer program product or computer program, computer readable medium, and electronic device, so that the amount of computation in the lane line determination process can be reduced at least to a certain extent.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned by practice of the application.
According to an aspect of an embodiment of the present application, there is provided a lane line determination method including: acquiring a target image containing a lane line; performing feature extraction on the target image through a deep learning model to obtain at least one lane line feature vector; determining the positions of at least two lane line reference points related to each lane line on the target image according to the feature vector of each lane line; and fitting the positions of the lane lines on the target image according to the positions of the at least two lane line reference points on the target image.
According to an aspect of an embodiment of the present application, there is provided a lane line determination apparatus including: a first acquisition unit configured to acquire a target image including a lane line; the extraction unit is used for extracting the features of the target image through a deep learning model to obtain at least one lane line feature vector; a first determination unit, configured to determine, according to each lane line feature vector, positions of at least two lane line reference points related to each lane line on the target image; and the fitting unit is used for fitting the positions of the lane lines on the target image according to the positions of the at least two lane line reference points on the target image.
In some embodiments of the present application, based on the foregoing solution, the lane line feature vector includes: a lane line confidence, at least two lane line reference point coordinates, and lane line reference point confidences corresponding one to one to the at least two lane line reference point coordinates; the lane line confidence is used for representing the probability that the lane line exists in the target image, the lane line reference point confidence is used for representing the probability that the lane line reference point exists on the lane line, and the lane line reference point coordinates are used for representing the positions of the lane line reference points in the target image.
In some embodiments of the present application, based on the foregoing scheme, the first determining unit is configured to: when the confidence coefficient of the lane line in the lane line feature vector is greater than or equal to a first preset threshold, determining a lane line reference point with the confidence coefficient of the lane line reference point being greater than or equal to a second preset threshold as a target lane line reference point; and determining the position of the target lane line reference point on the target image based on the lane line reference point coordinates of the target lane line reference point.
In some embodiments of the present application, based on the foregoing scheme, the first determining unit is configured to: when the lane line confidence degree in the lane line feature vector is smaller than a first preset threshold value, determining that the lane line corresponding to the lane line confidence degree is an invalid lane line; and when the confidence coefficient of the lane line in the lane line feature vector is greater than or equal to a first preset threshold and the confidence coefficient of a lane line reference point is less than a second preset threshold, determining the lane line reference point corresponding to the confidence coefficient of the lane line reference point as an invalid lane line reference point.
In some embodiments of the present application, based on the foregoing solution, the apparatus further includes: a training unit for acquiring an original training image; marking lane line information in the original training image according to the lane line distribution in the original training image, wherein the lane line information comprises lane line confidence, at least two lane line reference point coordinates and lane line reference point confidence which corresponds to the at least two lane line reference point coordinates one by one; and training an initial deep learning model through the original training image to obtain the deep learning model.
In some embodiments of the present application, based on the foregoing solution, the training unit is configured to: performing data enhancement on the original training image marked with lane line information to obtain a derivative training image, wherein the derivative training image is marked with the lane line information; and training an initial deep learning model through the original training image and the derived training image to obtain the deep learning model.
In some embodiments of the present application, based on the foregoing solution, the extracting unit is configured to: determining channel values of all pixel points in the target image on three image channels; for each image channel, carrying out normalization processing on the channel value of each pixel point in the target image on the image channel to obtain a preprocessed target image; and performing at least two continuous convolution operations and global average pooling operation on the preprocessed target image through a deep learning model so as to perform feature extraction on the target image.
In some embodiments of the present application, based on the foregoing solution, the apparatus further includes: a second acquisition unit, configured to acquire the camera intrinsic parameters and camera extrinsic parameters used when the target image was captured by a camera, after fitting the position of the lane line on the target image according to the positions of the at least two lane line reference points on the target image; a projection unit, configured to project the target image into a three-dimensional image based on the camera intrinsic parameters and the camera extrinsic parameters; and a second determination unit, configured to determine a target lane in the three-dimensional image according to the position of the lane line on the target image.
In some embodiments of the present application, based on the foregoing solution, the apparatus further includes: a third obtaining unit, configured to obtain a driving decision of a vehicle on a target lane after determining the target lane in the three-dimensional image according to a position of the lane line on the target image; and the rendering unit is used for rendering the traffic guidance identification corresponding to the driving decision in the three-dimensional image.
According to an aspect of embodiments herein, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the lane line determination method as described in the above embodiments.
According to an aspect of embodiments of the present application, there is provided a computer-readable medium on which a computer program is stored, the computer program, when executed by a processor, implementing the lane line determination method as described in the above embodiments.
According to an aspect of an embodiment of the present application, there is provided an electronic device including: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the lane line determination method as described in the above embodiments.
In the technical solution provided by some embodiments of the present application, at least one lane line feature vector is obtained by performing feature extraction on a target image containing lane lines, then the positions of at least two lane line reference points related to each lane line on the target image are determined according to each lane line feature vector, and the position of each lane line on the target image is then fitted. In this way, the lane line can be located without classifying every pixel of the image, which reduces the amount of computation in the lane line recognition process.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
FIG. 1 illustrates an implementation environment diagram of a solution according to an embodiment of the present application;
fig. 2 shows an application scenario of the lane line determination method according to an embodiment of the present application;
FIG. 3 shows a flow chart of a lane line determination method according to one embodiment of the present application;
FIG. 4 shows a visualization of determining lane line positions according to an embodiment of the present application;
FIG. 5 illustrates a detailed flow diagram for determining the location of at least two lane line reference points associated with each lane line on the target image according to one embodiment of the present application;
FIG. 6 illustrates a detailed flow diagram for determining the location of at least two lane line reference points associated with each lane line on the target image according to one embodiment of the present application;
FIG. 7 illustrates a flow diagram of a method of training a deep learning model according to one embodiment of the present application;
FIG. 8 shows a demonstration diagram for training a deep learning model according to an embodiment of the present application;
FIG. 9 illustrates a detailed flow diagram for training an initial deep learning model with the original training images according to an embodiment of the present application;
FIG. 10 illustrates a detailed flow diagram for feature extraction of the target image by a deep learning model according to one embodiment of the present application;
FIG. 11 shows a simulation of the process of feature extraction on a pre-processed target image by a deep learning model according to one embodiment of the application;
FIG. 12 illustrates a flowchart of a method after fitting the location of a lane line on the target image, according to one embodiment of the present application;
FIG. 13 illustrates a visualization of determining a target lane in a three-dimensional image according to an embodiment of the present application;
FIG. 14 illustrates a flowchart of a method after determining a target lane in the three-dimensional image, according to one embodiment of the present application;
FIG. 15 illustrates rendering a visualization of a traffic guidance marker in a three-dimensional image according to an embodiment of the present application;
FIG. 16 illustrates an overall flow diagram of a lane line determination method according to one embodiment of the present application;
FIG. 17 shows a block diagram of a lane line determination apparatus according to one embodiment of the present application;
FIG. 18 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the application. One skilled in the relevant art will recognize, however, that the subject matter of the present application can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the application.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Fig. 1 shows an implementation environment diagram of a technical solution according to an embodiment of the present application.
As shown in fig. 1, an implementation environment of the technical solution of the embodiment of the present application may include a terminal device. For example, any one of the smart phone 101, the tablet computer 102, the touch display 103 and the portable computer 104 shown in fig. 1 is included, but other electronic devices with touch display function and the like are also possible.
In an embodiment of the present application, a user may implement the technical solution of the embodiment of the present application by using a smartphone with a touch display function, such as the smartphone 101 shown in fig. 1. Specifically, the smart phone may obtain a target image including a lane line, perform feature extraction on the target image through a deep learning model to obtain at least one lane line feature vector, determine positions of at least two lane line reference points related to each lane line on the target image according to each lane line feature vector, and fit positions of the lane line on the target image according to the positions of the at least two lane line reference points on the target image.
To make the present application more intuitive for those skilled in the art, a specific example will be described herein.
In a specific example of an embodiment, the technical solution of the present application may be applied with reference to the application scenario shown in fig. 2, and fig. 2 shows an application scenario diagram of the lane line determining method according to an embodiment of the present application.
As shown in fig. 2, a vehicle 201 is traveling on a road 200, where the road 200 includes six lane lines A, B, C, D, E and F, and in some scenarios, such as smart driving scenarios, the vehicle 201 needs to identify the lane lines. Specifically, a target image including a lane line may be acquired by a vehicle-mounted device (e.g., a vehicle-mounted computer) on the vehicle 201, where the target image including the lane line may be captured by a camera arranged on the vehicle; the vehicle-mounted device then performs feature extraction on the target image through a deep learning model to obtain at least one lane line feature vector, determines, according to each lane line feature vector, the positions of at least two lane line reference points related to each lane line on the target image, and finally fits the position of the lane line on the target image according to the positions of the at least two lane line reference points on the target image.
It should be noted that the lane line determining method provided in the embodiment of the present application is generally executed by an in-vehicle device provided on a vehicle, and accordingly, a lane line determining apparatus is generally provided in the in-vehicle device. However, in other embodiments of the present application, the server connected to the vehicle-mounted device through the network may also have a similar function to the vehicle-mounted device, so as to execute the lane line determination scheme provided in the embodiments of the present application.
The implementation details of the technical solution of the embodiment of the present application are set forth in detail below:
fig. 3 shows a flowchart of a lane line determination method according to an embodiment of the present application, which may be performed by a device having a calculation processing function, such as the smartphone 101 shown in fig. 1. Referring to fig. 3, the lane line determining method at least includes steps 310 to 370, which are described in detail as follows:
in step 310, a target image including a lane line is acquired.
In the present application, a target image containing a lane line may be captured by a vehicle-mounted camera provided on the vehicle.
In step 330, feature extraction is performed on the target image through a deep learning model to obtain at least one lane line feature vector.
In one embodiment of the present application, the lane line feature vector includes: the confidence coefficient of the lane line, the coordinates of at least two lane line reference points and the confidence coefficient of the lane line reference point corresponding to the coordinates of the at least two lane line reference points one by one.
The lane line confidence is used for representing the probability that the lane line exists in the target image, the lane line reference point confidence is used for representing the probability that the lane line reference point exists on the lane line, and the lane line reference point coordinates are used for representing the positions of the lane line reference points in the target image.
In order to make the lane line feature vector better understood by those skilled in the art, the lane line feature vector will be further described with reference to fig. 4:
fig. 4 shows a visualization of determining lane line positions according to an embodiment of the present application.
In the application, at least one lane line feature vector can be obtained by performing feature extraction on a target image containing a lane line, wherein the lane line feature vector is a mathematical expression of the position of the lane line in the target image, and one lane line feature vector can be used for describing the position of one lane line in the target image.
Specifically, in one example, as shown in fig. 4, the road 400 includes six lane lines A, B, C, D, E and F, and the image 402 is a target image that can be captured by a camera provided on the vehicle 401.
After feature extraction is performed on the target image 402, six lane line feature vectors may be obtained, namely [a a1 a2 a3 A1 A2 A3], [b b1 b2 b3 B1 B2 B3], [c c1 c2 c3 C1 C2 C3], [d d1 d2 d3 D1 D2 D3], [e e1 e2 e3 E1 E2 E3] and [f f1 f2 f3 F1 F2 F3]. Each lane line feature vector is used to describe the position of one lane line in the target image; for example, the lane line feature vector [a a1 a2 a3 A1 A2 A3] describes the position of lane line A in the target image, and the lane line feature vector [b b1 b2 b3 B1 B2 B3] describes the position of lane line B in the target image.

Further, a, b, c, d, e and f may be the lane line confidences of lane lines A, B, C, D, E, F, respectively. "a1, a2, a3", "b1, b2, b3", "c1, c2, c3", "d1, d2, d3", "e1, e2, e3" and "f1, f2, f3" may be the coordinates (lateral coordinates) of the three lane line reference points on lane lines A, B, C, D, E, F, respectively. "A1, A2, A3", "B1, B2, B3", "C1, C2, C3", "D1, D2, D3", "E1, E2, E3" and "F1, F2, F3" may be the lane line reference point confidences corresponding one to one to the three lane line reference points on lane lines A, B, C, D, E, F, respectively.

For example, a is the lane line confidence of lane line A and may be used to indicate the probability that lane line A exists in the target image; a1 is the coordinate (lateral coordinate) of one lane line reference point on lane line A and may be used to indicate the (lateral) position of that reference point in the target image; and A1 is the lane line reference point confidence of that reference point and may be used to indicate the probability that the reference point lies on lane line A.
In the present application, the coordinates of the lane line reference point may be a value of 0 to 1, where "0" and "1" represent the coordinates of the two lateral edges of the target image, respectively.
In the above example, if the output a1 is a value of 0 to 1, for example a1 is 0.35, then the lateral coordinate corresponding to a1 may be 1280 × 0.35 = 448, where "1280" is the width of the target image. It should be further noted that the form of the lane line feature vector is not limited to the one-dimensional vector described in the above example; in other examples, it may also be a multidimensional vector. For example, the lane line feature vector of lane line A may also be represented as follows:
(Formula image BDA0002787429670000091 in the original document: a multidimensional arrangement of the lane line confidence a, the reference point coordinates a1, a2, a3 and the reference point confidences A1, A2, A3.)
where a is the lane line confidence of lane line A, "a1, a2, a3" are the coordinates of the three lane line reference points on lane line A, and "A1, A2, A3" are the lane line reference point confidences corresponding one to one to the three lane line reference points on lane line A.
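Purely as an illustration (the patent does not specify any code-level data structure), the one-dimensional form of such a feature vector could be held and split roughly as in the following Python sketch; the class and function names are hypothetical:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class LaneLineFeature:
    lane_confidence: float           # a: probability that this lane line exists in the image
    point_coords: List[float]        # a1, a2, a3: normalized lateral coordinates in [0, 1]
    point_confidences: List[float]   # A1, A2, A3: probability that each point lies on the lane line

def parse_lane_vector(vec: List[float], num_points: int = 3) -> LaneLineFeature:
    """Split a flat model output [a, a1..an, A1..An] into its three parts."""
    return LaneLineFeature(
        lane_confidence=vec[0],
        point_coords=list(vec[1:1 + num_points]),
        point_confidences=list(vec[1 + num_points:1 + 2 * num_points]),
    )

# Usage example with made-up values (0.35 corresponds to the a1 example above):
feature_a = parse_lane_vector([0.9, 0.35, 0.40, 0.45, 0.8, 0.9, 0.2])
```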
It should be further noted that the number of lane line reference point coordinates in the lane line feature vector is not limited to three as in the above example; in other examples, the number of lane line reference point coordinates may also be two, four, or more. For example, the lane line feature vector of lane line A may also be: [a a1 a2 a3 a4 a5 A1 A2 A3 A4 A5], where a is the lane line confidence of lane line A, "a1, a2, a3, a4, a5" are the coordinates of five lane line reference points on lane line A, and "A1, A2, A3, A4, A5" are the lane line reference point confidences corresponding one to one to the five lane line reference points on lane line A.

In one embodiment of the present application, the deep learning model may be trained according to the steps shown in fig. 7.
Referring to FIG. 7, a flow diagram of a method of training a deep learning model is shown, according to one embodiment of the present application. Specifically, the method comprises steps 341 to 343:
step 341, obtain the original training image.
Step 342, marking lane line information in the original training image according to the lane line distribution in the original training image, wherein the lane line information comprises lane line confidence, at least two lane line reference point coordinates, and lane line reference point confidence corresponding to the at least two lane line reference point coordinates one to one.
Step 343, training an initial deep learning model through the original training image to obtain the deep learning model.
In order to make the training process of the deep learning model better understood by those skilled in the art, the following description will be made with reference to fig. 8:
referring to fig. 8, a diagram illustrating a training of a deep learning model according to an embodiment of the present application is shown.
Specifically, the original training image labeled with the lane line confidence, the lane line reference point confidence and the lane line reference point coordinates may be input into the initial deep learning model, wherein the lane line confidence and the lane line reference point confidence may be trained through binary cross entropy loss, and the lane line reference point coordinates may be trained through mean square error loss.
For the binary cross entropy loss, in the binary classification case, the result to be predicted by the model has only two classes, and the predicted probabilities of the two classes are p and 1 - p. The expression is as follows:
L = -(1/N) Σ_i [ y_i·log(p_i) + (1 − y_i)·log(1 − p_i) ]

where y_i denotes the label of sample i (1 for the positive class, 0 for the negative class), and p_i denotes the probability that sample i is predicted to be positive.
Further, for the loss of mean square error, the following is defined:
MSE = (1/n) Σ_i (y_i − y_i′)²

where y_i denotes the correct answer for the i-th data item in a batch of data, y_i′ denotes the predicted value of the i-th data item given by the neural network, and MSE denotes the average error over the batch of data.
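The two loss terms can be written out directly from the expressions above. The following NumPy sketch is only an illustration (the patent does not publish training code); it evaluates the binary cross entropy over the confidence outputs and the mean square error over the coordinate outputs, and the combined weighting is an assumption:

```python
import numpy as np

def binary_cross_entropy(y_true, p_pred, eps=1e-7):
    """L = -(1/N) * sum_i [ y_i*log(p_i) + (1 - y_i)*log(1 - p_i) ]."""
    y = np.asarray(y_true, dtype=float)
    p = np.clip(np.asarray(p_pred, dtype=float), eps, 1.0 - eps)  # avoid log(0)
    return -np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

def mean_square_error(y_true, y_pred):
    """MSE = (1/n) * sum_i (y_i - y_i')^2."""
    return np.mean((np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)) ** 2)

def total_loss(conf_true, conf_pred, coord_true, coord_pred, coord_weight=1.0):
    # Hypothetical combined objective: BCE for the lane line / reference point
    # confidences, MSE for the reference point coordinates.
    return binary_cross_entropy(conf_true, conf_pred) + coord_weight * mean_square_error(coord_true, coord_pred)
```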
In this embodiment, training the initial deep learning model through the original training image may be performed according to the steps shown in fig. 9.
Referring to fig. 9, a detailed flow diagram for training an initial deep learning model by the original training image is shown according to an embodiment of the present application. The method specifically comprises the following steps of 3431 to 3432:
step 3431, performing data enhancement on the original training image marked with the lane line information to obtain a derivative training image, wherein the derivative training image is marked with the lane line information.
Step 3432, training an initial deep learning model through the original training image and the derived training image to obtain the deep learning model.
In the present application, data enhancement can artificially extend a training data set by letting limited data produce more equivalent data, which can overcome the technical problem of insufficient training data, resulting in better training results.
In the present application, data enhancement may take a variety of forms, such as geometric transformation, color transformation, scaling, random erasing, flipping, translation and cropping, and so forth.
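As one concrete example of such an enhancement, a horizontal flip must also mirror the annotated lateral coordinates of the lane line reference points. The sketch below assumes coordinates normalized to [0, 1] and a hypothetical label layout; it is an illustration, not part of the patent:

```python
import numpy as np

def horizontal_flip(image: np.ndarray, lane_labels):
    """Flip an (H, W, 3) image left-right and mirror the normalized lateral
    coordinates of every annotated lane line reference point accordingly."""
    flipped = image[:, ::-1, :].copy()
    flipped_labels = []
    for label in lane_labels:
        flipped_labels.append({
            "lane_confidence": label["lane_confidence"],
            "point_coords": [1.0 - x for x in label["point_coords"]],  # mirror around the image centre
            "point_confidences": label["point_confidences"],
        })
    return flipped, flipped_labels
```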
In one embodiment of the present application, feature extraction of the target image by the deep learning model may be performed according to the steps shown in fig. 10.
Referring to fig. 10, a detailed flowchart of feature extraction on the target image by a deep learning model according to an embodiment of the application is shown. Specifically, the method comprises steps 331 to 333:
and 331, determining channel values of all pixel points in the target image on three image channels.
Step 332, for each image channel, performing normalization processing on the channel value of each pixel point in the target image on the image channel to obtain a preprocessed target image.
Step 333, performing at least two consecutive convolution operations and a global average pooling operation on the preprocessed target image through the deep learning model, so as to perform feature extraction on the target image.
Specifically, referring to fig. 11, a process simulation diagram of feature extraction performed on a preprocessed target image by a deep learning model according to an embodiment of the present application is shown.
As shown in fig. 11, the normalized image data is first subjected to at least two consecutive convolution operations to obtain intermediate feature layer data, where the down-sampling multiple is 16, and then the intermediate feature layer data is subjected to global average pooling to obtain one-dimensional lane line feature vectors.
In the subsequent processing process, the feature vector can be analyzed to obtain the coordinates of the lane line reference point in the target image.
In this method, the channel values of the pixel points in the target image are normalized on each image channel, so that the model that extracts features from the target image converges more quickly. This reduces the computational complexity of lane line detection, increases the operation speed, reduces the consumption of computing resources, and allows lane line detection to run stably and in real time on vehicle-mounted devices with limited computing performance.
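To make steps 331 to 333 concrete, the sketch below shows one possible arrangement in PyTorch: per-channel normalization of the input, a stack of stride-2 convolutions giving an overall down-sampling factor of 16, and global average pooling followed by a linear head that outputs one flat feature vector per lane line. The layer sizes, the normalization constants and the output head are illustrative assumptions and are not taken from the patent:

```python
import torch
import torch.nn as nn

N_LANES, N_POINTS = 6, 3
VECTOR_LEN = 1 + 2 * N_POINTS  # [a, a1..a3, A1..A3]

def preprocess(image: torch.Tensor,
               mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)) -> torch.Tensor:
    """Normalize the channel values of a (3, H, W) uint8 image on each of the three channels.
    The mean/std values here are common defaults, not values from the patent."""
    x = image.float() / 255.0
    m = torch.tensor(mean).view(3, 1, 1)
    s = torch.tensor(std).view(3, 1, 1)
    return (x - m) / s

class LaneLineNet(nn.Module):
    def __init__(self):
        super().__init__()
        chans = [3, 16, 32, 64, 128]
        layers = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            # Four stride-2 convolutions give a total down-sampling factor of 16.
            layers += [nn.Conv2d(c_in, c_out, 3, stride=2, padding=1), nn.ReLU(inplace=True)]
        self.backbone = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool2d(1)                    # global average pooling
        self.head = nn.Linear(chans[-1], N_LANES * VECTOR_LEN)

    def forward(self, x):
        feat = self.pool(self.backbone(x)).flatten(1)          # (B, 128)
        out = torch.sigmoid(self.head(feat))                   # confidences and coordinates all in [0, 1]
        return out.view(-1, N_LANES, VECTOR_LEN)               # one feature vector per lane line
```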
With continued reference to fig. 3, in step 350, the positions of at least two lane line reference points associated with each lane line on the target image are determined according to each lane line feature vector.
In the present application, determining the positions of at least two lane line reference points associated with each lane line on the target image may be performed according to the steps shown in fig. 5.
Referring to FIG. 5, a detailed flow diagram for determining the location of at least two lane line reference points associated with each lane line on the target image is shown, according to one embodiment of the present application. Specifically, the method comprises steps 351 to 352:
step 351, when the confidence of the lane line in the lane line feature vector is greater than or equal to a first preset threshold, determining the lane line reference point with the confidence of the lane line reference point greater than or equal to a second preset threshold as a target lane line reference point.
Referring to the example in fig. 4, when the lane line confidence in the lane line feature vector is greater than or equal to the first predetermined threshold, it indicates that a lane line corresponding to the lane line confidence exists in the target image 402, for example, the lane line A, B, C, D, E in the target image 402. When the confidence of the lane line reference point is greater than or equal to the second predetermined threshold, it indicates that there is a lane line reference point corresponding to the confidence of the lane line reference point in the target image 402, for example, the lane line reference points a1, a2, b1, b2, b3, c1, c2, c3, d1, d2, d3, e1, e2 in the target image 402.
Step 352, determining the position of the target lane line reference point on the target image based on the lane line reference point coordinates of the target lane line reference point.
In the present application, in the lane line feature vector obtained by feature extraction, the coordinates of the lane line reference point may refer to the lateral coordinates of the lane line reference point, and if the lateral coordinates of the lane line reference point are a value of 0 to 1, the actual lateral coordinates of the lane line reference point in the target image may be:
lateral coordinate value × image width
In one lane line feature vector, at least two lane line reference points are included, for example, three lane line reference points may be included, or four lane line reference points may be included.
It should be noted that the number of lane line reference points included in each lane line feature vector is the same, and one lane line reference point of each lane line feature vector lies on each horizontal bisector of the target image; for example, in fig. 4, "a1, b1, c1, d1, e1, f1" lie on the bisector "L1", "a2, b2, c2, d2, e2, f2" lie on the bisector "L2", and "a3, b3, c3, d3, e3, f3" lie on the bisector "L3".
In this application, the actual longitudinal coordinate of the lane line reference point in the target image may be determined according to the number of the lane line reference points included in the lane line feature vector, specifically, for example, the number of the lane line reference points is 6, the image height is 420, and then the fixed longitudinal coordinate of the lane line reference point may be 60, 120, 180, 240, 300, and 360, respectively.
Based on the above procedure, the position of the target lane line reference point on the target image can be determined according to the lane line reference point coordinates of the target lane line reference point, for example, the position coordinate on the image corresponding to the a1 point in the above example is (60, 448).
In the present application, determining the positions of at least two lane line reference points associated with each lane line on the target image may also be performed according to the steps shown in fig. 6.
Referring to FIG. 6, a detailed flow diagram of determining the location of at least two lane line reference points associated with each lane line on the target image is shown, according to one embodiment of the present application. Specifically, steps 353 to 354:
step 353, when the confidence of the lane line in the lane line feature vector is smaller than a first preset threshold, determining that the lane line corresponding to the confidence of the lane line is an invalid lane line.
Referring to the example in fig. 4, when the confidence of the lane line in the feature vector of the lane line is smaller than the first predetermined threshold, it indicates that the lane line corresponding to the confidence of the lane line in the target image 402 is an invalid lane line, for example, the lane line F in the target image 402.
Step 354, when the confidence of the lane line in the feature vector of the lane line is greater than or equal to a first predetermined threshold and the confidence of the reference point of the lane line is less than a second predetermined threshold, determining that the reference point of the lane line corresponding to the confidence of the reference point of the lane line is an invalid reference point of the lane line.
When the confidence of the lane line in the lane line feature vector is greater than or equal to a first predetermined threshold and the confidence of the lane line reference point is less than a second predetermined threshold, it indicates that the lane line exists in the target image 402 but the lane line reference point corresponding to the confidence of the lane line reference point is an invalid lane line reference point, such as the lane line reference points a3 and e3 in the target image 402.
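Combining the two branches of figs. 5 and 6, the post-processing of the lane line feature vectors might be sketched as follows. The thresholds, image size and vector layout used here are illustrative assumptions rather than values prescribed by the patent:

```python
def decode_reference_points(lane_vectors, img_w=1280, img_h=420,
                            lane_thresh=0.5, point_thresh=0.5):
    """For each valid lane line, return the pixel positions of its valid reference points.

    Each element of lane_vectors is [a, a1..an, A1..An]: a lane line confidence,
    n normalized lateral coordinates and n reference point confidences. The
    longitudinal coordinates are assumed to be fixed, evenly spaced rows of the
    image (e.g. 60, 120, ..., 360 for n = 6 and img_h = 420).
    """
    lanes = []
    for vec in lane_vectors:
        if vec[0] < lane_thresh:                   # fig. 6, step 353: invalid lane line
            continue
        n = (len(vec) - 1) // 2
        coords, confs = vec[1:1 + n], vec[1 + n:]
        points = []
        for i, (x_norm, conf) in enumerate(zip(coords, confs)):
            if conf < point_thresh:                # fig. 6, step 354: invalid reference point
                continue
            x = x_norm * img_w                     # actual lateral coordinate, e.g. 0.35 * 1280 = 448
            y = (i + 1) * img_h / (n + 1)          # fixed longitudinal coordinate
            points.append((x, y))
        lanes.append(points)
    return lanes
```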
With continued reference to fig. 3, in step 370, the position of the lane line on the target image is fitted according to the positions of the at least two lane line reference points on the target image.
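The patent does not prescribe a particular fitting method; one simple choice, shown here purely as an illustrative assumption, is a least-squares polynomial fit through the reference points of each lane line:

```python
import numpy as np

def fit_lane_line(points, degree=2):
    """Fit x = f(y) through the reference points of one lane line.

    Fitting x as a function of y is convenient because lane lines are roughly
    vertical in the image, so each image row crosses a lane line at most once.
    With fewer than degree + 1 points the degree is lowered accordingly.
    """
    xs, ys = zip(*points)
    degree = min(degree, len(points) - 1)
    coeffs = np.polyfit(ys, xs, degree)   # least-squares fit
    return np.poly1d(coeffs)              # callable: x = f(y)

# Usage example with made-up reference points:
# f = fit_lane_line([(448, 60), (430, 120), (410, 180)])
# xs = f(np.arange(0, 420))               # lateral position at every image row
```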
In the present application, the proposed lane line determination scheme may be used for AR navigation.
Specifically, in an embodiment of the present application, after fitting the positions of the lane lines on the target image according to the positions of the at least two lane line reference points on the target image, the steps shown in fig. 12 may be further performed.
Referring to FIG. 12, a flowchart of a method after fitting the location of a lane line on the target image is shown, according to one embodiment of the present application. The method specifically comprises the following steps 381 to 383:
and 381, acquiring camera internal parameters and camera external parameters of the target image when the target image is shot by a video camera.
In the present application, the camera intrinsic parameters are parameters related to the characteristics of the camera itself, such as the focal length and pixel size of the camera; the camera extrinsic parameters describe the camera in the world coordinate system, such as the position and rotation (orientation) of the camera.
Step 382, projecting the target image into a three-dimensional image based on the camera intrinsic parameters and the camera extrinsic parameters.
Step 383, determining a target lane in the three-dimensional image according to the position of the lane line on the target image.
Fig. 13 shows a visualization diagram for determining a target lane in a three-dimensional image according to an embodiment of the present application.
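One common way to realise the projection of steps 381 to 383, given here only as an illustrative assumption and not as the patent's prescribed implementation, is to back-project each fitted lane line pixel onto the road plane by intersecting its viewing ray with the ground, using the camera intrinsic matrix K and the extrinsic rotation R and translation t:

```python
import numpy as np

def pixel_to_ground(u, v, K, R, t):
    """Back-project pixel (u, v) onto the ground plane z = 0 in world coordinates.

    K is the 3x3 intrinsic matrix; R (3x3) and t (3,) map world points into the
    camera frame: X_cam = R @ X_world + t.
    """
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray in the camera frame
    ray_world = R.T @ ray_cam                            # the same ray in the world frame
    cam_center = -R.T @ t                                # camera position in the world frame
    s = -cam_center[2] / ray_world[2]                    # scale at which the ray reaches z = 0
    return cam_center + s * ray_world                    # 3D point on the road plane
```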
In this embodiment, after determining a target lane in the three-dimensional image according to the position of the lane line on the target image, the steps shown in fig. 14 may be further performed.
Referring to fig. 14, a flowchart of a method after determining a target lane in the three-dimensional image according to one embodiment of the present application is shown. Specifically, the method comprises steps 391 to 392:
and step 391, acquiring a driving decision of the vehicle on the target lane.
Step 392, rendering a traffic guidance sign corresponding to the driving decision in the three-dimensional image.
Fig. 15 is a visualization diagram of rendering a traffic guidance sign in a three-dimensional image according to an embodiment of the present application; as shown in fig. 15, a traffic guidance sign indicating a change to the right is rendered according to the driving decision.
In order to make the present application better understood by those skilled in the art, the overall flow of the present application will be described with reference to fig. 16.
Referring to fig. 16, an overall flow chart of a lane line determination method according to one embodiment of the present application is shown.
In the training phase of the deep learning model, steps 1601 to 1605 are included:
step 1601, training image data is acquired.
Step 1602, perform data annotation in the training image.
Step 1603, data enhancement processing is performed on the training image data.
Step 1604, training the initial deep learning model.
Step 1605, obtaining a deep learning model.
In the application phase of the deep learning model, steps 1610 to 1650 are included:
step 1610, inputting a target image.
Step 1620, pre-processing the target image.
Step 1630, feature extraction is performed on the target image through the trained deep learning model.
Step 1640, process the lane line feature vectors.
Step 1650, determining lane line position.
In the technical solution provided by some embodiments of the present application, at least one lane line feature vector is obtained by performing feature extraction on a target image containing lane lines, then the positions of at least two lane line reference points related to each lane line on the target image are determined according to each lane line feature vector, and the position of each lane line on the target image is then fitted. In this way, the lane line can be located without classifying every pixel of the image, which reduces the amount of computation in the lane line recognition process.
Embodiments of the apparatus of the present application are described below, which may be used to perform the lane line determining method in the above-described embodiments of the present application. For details that are not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the lane line determining method described above in the present application.
Fig. 17 shows a block diagram of a lane line determination apparatus according to an embodiment of the present application.
Referring to fig. 17, a lane line determining apparatus 1700 according to an embodiment of the present application includes: a first obtaining unit 1701, an extracting unit 1702, a first determining unit 1703, and a fitting unit 1704.
A first acquisition unit 1701 for acquiring a target image including a lane line; an extracting unit 1702, configured to perform feature extraction on the target image through a deep learning model to obtain at least one lane line feature vector; a first determining unit 1703, configured to determine, according to each lane line feature vector, positions of at least two lane line reference points related to each lane line on the target image; a fitting unit 1704, configured to fit the positions of the lane lines on the target image according to the positions of the at least two lane line reference points on the target image.
In some embodiments of the present application, based on the foregoing solution, the lane line feature vector includes: a lane line confidence, at least two lane line reference point coordinates, and lane line reference point confidences corresponding one to one to the at least two lane line reference point coordinates; the lane line confidence is used for representing the probability that the lane line exists in the target image, the lane line reference point confidence is used for representing the probability that the lane line reference point exists on the lane line, and the lane line reference point coordinates are used for representing the positions of the lane line reference points in the target image.
In some embodiments of the present application, based on the foregoing scheme, the first determining unit 1703 is configured to: when the confidence coefficient of the lane line in the lane line feature vector is greater than or equal to a first preset threshold, determining a lane line reference point with the confidence coefficient of the lane line reference point being greater than or equal to a second preset threshold as a target lane line reference point; and determining the position of the target lane line reference point on the target image based on the lane line reference point coordinates of the target lane line reference point.
In some embodiments of the present application, based on the foregoing scheme, the first determining unit 1703 is configured to: when the lane line confidence degree in the lane line feature vector is smaller than a first preset threshold value, determining that the lane line corresponding to the lane line confidence degree is an invalid lane line; and when the confidence coefficient of the lane line in the lane line feature vector is greater than or equal to a first preset threshold and the confidence coefficient of a lane line reference point is less than a second preset threshold, determining the lane line reference point corresponding to the confidence coefficient of the lane line reference point as an invalid lane line reference point.
In some embodiments of the present application, based on the foregoing solution, the apparatus further includes: a training unit for acquiring an original training image; marking lane line information in the original training image according to the lane line distribution in the original training image, wherein the lane line information comprises lane line confidence, at least two lane line reference point coordinates and lane line reference point confidence which corresponds to the at least two lane line reference point coordinates one by one; and training an initial deep learning model through the original training image to obtain the deep learning model.
In some embodiments of the present application, based on the foregoing solution, the training unit is configured to: performing data enhancement on the original training image marked with lane line information to obtain a derivative training image, wherein the derivative training image is marked with the lane line information; and training an initial deep learning model through the original training image and the derived training image to obtain the deep learning model.
In some embodiments of the present application, based on the foregoing scheme, the extracting unit 1702 is configured to: determining channel values of all pixel points in the target image on three image channels; for each image channel, carrying out normalization processing on the channel value of each pixel point in the target image on the image channel to obtain a preprocessed target image; and performing at least two continuous convolution operations and global average pooling operation on the preprocessed target image through a deep learning model so as to perform feature extraction on the target image.
In some embodiments of the present application, based on the foregoing solution, the apparatus further includes: a second acquisition unit, configured to acquire the camera intrinsic parameters and camera extrinsic parameters used when the target image was captured by a camera, after fitting the position of the lane line on the target image according to the positions of the at least two lane line reference points on the target image; a projection unit, configured to project the target image into a three-dimensional image based on the camera intrinsic parameters and the camera extrinsic parameters; and a second determination unit, configured to determine a target lane in the three-dimensional image according to the position of the lane line on the target image.
In some embodiments of the present application, based on the foregoing solution, the apparatus further includes: a third obtaining unit, configured to obtain a driving decision of a vehicle on a target lane after determining the target lane in the three-dimensional image according to a position of the lane line on the target image; and the rendering unit is used for rendering the traffic guidance identification corresponding to the driving decision in the three-dimensional image.
FIG. 18 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
It should be noted that the computer system 1800 of the electronic device shown in fig. 18 is only an example, and should not bring any limitation to the function and the scope of the application of the embodiments.
As shown in fig. 18, computer system 1800 includes a Central Processing Unit (CPU)1801, which may perform various appropriate actions and processes, such as executing the methods described in the above embodiments, according to a program stored in a Read-Only Memory (ROM) 1802 or a program loaded from a storage portion 1808 into a Random Access Memory (RAM) 1803. In the RAM 1803, various programs and data necessary for system operation are also stored. The CPU 1801, ROM 1802, and RAM 1803 are connected to each other via a bus 1804. An Input/Output (I/O) interface 1805 is also connected to bus 1804.
The following components are connected to the I/O interface 1805: an input portion 1806 including a keyboard, a mouse, and the like; an output section 1807 including a Display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage portion 1808 including a hard disk and the like; and a communication section 1809 including a Network interface card such as a LAN (Local Area Network) card, a modem, or the like. The communication section 1809 performs communication processing via a network such as the internet. A driver 1810 is also connected to the I/O interface 1805 as needed. A removable medium 1811 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 1810 as necessary, so that a computer program read out therefrom is installed into the storage portion 1808 as necessary.
In particular, according to embodiments of the application, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated by the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communication portion 1809, and/or installed from the removable media 1811. The computer program executes various functions defined in the system of the present application when executed by a Central Processing Unit (CPU) 1801.
It should be noted that the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash Memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software, or may be implemented by hardware, and the described units may also be disposed in a processor. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves.
As another aspect, the present application also provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method provided in the first aspect or the various alternative implementations of the first aspect.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method described in the above embodiments.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to embodiments of the present application, the features and functions of two or more modules or units described above may be embodied in one module or unit. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which can be a personal computer, a server, a touch terminal, a network device, etc.) to execute the method according to the embodiments of the present application.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the embodiments disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. A lane line determination method, the method comprising:
acquiring a target image containing a lane line;
performing feature extraction on the target image through a deep learning model to obtain at least one lane line feature vector;
determining the positions of at least two lane line reference points related to each lane line on the target image according to each lane line feature vector;
and fitting the position of each lane line on the target image according to the positions of the at least two lane line reference points on the target image.
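To make the fitting step of claim 1 concrete, the following is a minimal sketch in Python. It assumes the reference points of one lane line are fitted with a low-degree polynomial sampled along the image rows; the claim itself does not fix any particular curve model, so the function name, the polynomial degree and the sampling count are illustrative only.

```python
import numpy as np

def fit_lane_line(reference_points, num_samples=50):
    """Fit one lane line through its reference points and sample it densely.

    reference_points: iterable of (x, y) pixel coordinates on the target image.
    A quadratic fit x = f(y) is assumed; the claim leaves the curve model open.
    """
    pts = np.asarray(reference_points, dtype=float)
    xs, ys = pts[:, 0], pts[:, 1]
    # Fit x as a polynomial in y: lane lines run roughly vertically in the
    # image, so this parameterization stays well conditioned.
    coeffs = np.polyfit(ys, xs, deg=min(2, len(pts) - 1))
    y_dense = np.linspace(ys.min(), ys.max(), num_samples)
    x_dense = np.polyval(coeffs, y_dense)
    return np.stack([x_dense, y_dense], axis=1)  # fitted positions on the target image
```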
2. The method of claim 1, wherein the lane line feature vector comprises: a lane line confidence, at least two lane line reference point coordinates, and a lane line reference point confidence corresponding one to one to the at least two lane line reference point coordinates;
the lane line confidence is used for representing the probability of the lane line existing in the target image, the lane line reference point confidence is used for representing the probability that the lane line reference point is located on the lane line, and the lane line reference point coordinates are used for representing the position of the lane line reference point in the target image.
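A possible in-memory layout for the lane line feature vector of claim 2, sketched as a Python data structure; the type and field names are illustrative and not taken from the patent.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class LaneLineFeatureVector:
    # Probability that this lane line actually exists in the target image.
    lane_confidence: float
    # (x, y) coordinates of the lane line reference points in image space.
    reference_points: List[Tuple[float, float]]
    # One confidence per reference point: probability the point lies on the lane line.
    point_confidences: List[float]
```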
3. The method of claim 2, wherein determining the location of at least two lane line reference points associated with each lane line on the target image from each lane line feature vector comprises:
when the lane line confidence in the lane line feature vector is greater than or equal to a first preset threshold, determining a lane line reference point whose lane line reference point confidence is greater than or equal to a second preset threshold as a target lane line reference point;
and determining the position of the target lane line reference point on the target image based on the lane line reference point coordinates of the target lane line reference point.
4. The method of claim 3, further comprising:
when the lane line confidence in the lane line feature vector is smaller than the first preset threshold, determining that the lane line corresponding to the lane line confidence is an invalid lane line;
and when the lane line confidence in the lane line feature vector is greater than or equal to the first preset threshold and a lane line reference point confidence is smaller than the second preset threshold, determining the lane line reference point corresponding to the lane line reference point confidence as an invalid lane line reference point.
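The filtering described in claims 3 and 4 can be sketched as follows, reusing the illustrative LaneLineFeatureVector structure above; the two 0.5 values are placeholders for the first and second preset thresholds.

```python
def select_target_reference_points(feature, lane_threshold=0.5, point_threshold=0.5):
    """Apply the threshold logic of claims 3 and 4 to one lane line feature vector.

    Returns None for an invalid lane line, otherwise the target reference points
    kept for fitting the lane line on the target image.
    """
    if feature.lane_confidence < lane_threshold:
        return None  # whole lane line is invalid (claim 4, first case)
    return [
        point
        for point, conf in zip(feature.reference_points, feature.point_confidences)
        if conf >= point_threshold  # low-confidence points are invalid reference points
    ]
```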
5. The method of claim 1, wherein the deep learning model is trained by:
acquiring an original training image;
marking lane line information in the original training image according to the lane line distribution in the original training image, wherein the lane line information comprises a lane line confidence, at least two lane line reference point coordinates, and a lane line reference point confidence corresponding one to one to the at least two lane line reference point coordinates;
and training an initial deep learning model through the original training image to obtain the deep learning model.
6. The method of claim 5, wherein training an initial deep learning model over the original training image comprises:
performing data enhancement on the original training image marked with lane line information to obtain a derivative training image, wherein the derivative training image is marked with the lane line information;
and training an initial deep learning model through the original training image and the derived training image to obtain the deep learning model.
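Claims 5 and 6 leave the data enhancement method open. One common choice, shown below as an assumed example only, is a horizontal flip that mirrors both the original training image and its marked lane line reference points, so that the derived training image remains correctly labelled.

```python
import numpy as np

def horizontal_flip_with_labels(image, lane_annotations):
    """Mirror a training image and its lane line annotations left-right.

    image: H x W x 3 array; lane_annotations: list of lane lines, each a list
    of (x, y) reference point coordinates marked on the original training image.
    """
    width = image.shape[1]
    flipped_image = image[:, ::-1, :].copy()
    flipped_annotations = [
        [(width - 1 - x, y) for (x, y) in lane] for lane in lane_annotations
    ]
    return flipped_image, flipped_annotations
```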
7. The method of claim 1, wherein the feature extraction of the target image through the deep learning model comprises:
determining channel values of all pixel points in the target image on three image channels;
for each image channel, normalizing the channel values of the pixel points in the target image on the image channel to obtain a preprocessed target image;
and performing at least two continuous convolution operations and global average pooling operation on the preprocessed target image through a deep learning model so as to perform feature extraction on the target image.
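A minimal PyTorch sketch of the feature extractor outlined in claim 7 follows. The layer widths, the number of lane lines and the number of reference points per lane line are assumptions for illustration; the claim only requires per-channel normalization, at least two successive convolution operations and a global average pooling operation before the lane line feature vectors are produced.

```python
import torch
import torch.nn as nn

class LaneLineBackbone(nn.Module):
    def __init__(self, num_lanes=4, points_per_lane=8):
        super().__init__()
        # Per lane line: 1 lane line confidence + (x, y, confidence) per reference point.
        out_dim = num_lanes * (1 + 3 * points_per_lane)
        self.convs = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)   # global average pooling
        self.head = nn.Linear(64, out_dim)

    def forward(self, image, mean, std):
        # image: (N, 3, H, W) float tensor; mean, std: per-channel statistics of shape (3,).
        # Normalize each of the three image channels before the convolutions.
        x = (image - mean.view(1, 3, 1, 1)) / std.view(1, 3, 1, 1)
        x = self.convs(x)                      # at least two successive convolutions
        x = self.pool(x).flatten(1)            # global average pooling
        return self.head(x)                    # flattened lane line feature vectors
```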
8. The method of claim 1, wherein after fitting the position of the lane line on the target image according to the positions of the at least two lane line reference points on the target image, the method further comprises:
acquiring camera intrinsic parameters and camera extrinsic parameters used when the target image was captured by a camera;
projecting the target image into a three-dimensional image based on the camera intrinsic parameters and the camera extrinsic parameters;
and determining a target lane in the three-dimensional image according to the position of the lane line on the target image.
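Claim 8 projects the target image into a three-dimensional image from the camera intrinsic and extrinsic parameters. One concrete way to realize this for the fitted lane line points, assuming a flat road surface and extrinsics that map world coordinates to camera coordinates, is to back-project each pixel onto the ground plane:

```python
import numpy as np

def pixel_to_ground(u, v, K, R, t):
    """Back-project image pixel (u, v) onto the road plane z = 0 in world coordinates.

    K: 3x3 camera intrinsic matrix; (R, t): camera extrinsic parameters mapping
    world points to camera coordinates, p_cam = R @ p_world + t.  A flat road
    surface is assumed.
    """
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray in the camera frame
    ray_world = R.T @ ray_cam                           # the same ray in the world frame
    cam_center = -R.T @ t                               # camera center in world coordinates
    scale = -cam_center[2] / ray_world[2]               # intersect the ray with z = 0
    return cam_center + scale * ray_world               # 3D point on the road surface
```

Applying this to every sampled point of a fitted lane line yields its trace on the road surface, from which the target lane in the three-dimensional image can be determined.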
9. The method of claim 8, wherein after determining a target lane in the three-dimensional image based on the position of the lane line on the target image, the method further comprises:
acquiring a driving decision of a vehicle on the target lane;
rendering a traffic guidance marker corresponding to the driving decision in the three-dimensional image.
10. A lane line determination apparatus, characterized in that the apparatus comprises:
a first acquisition unit configured to acquire a target image including a lane line;
an extraction unit, configured to perform feature extraction on the target image through a deep learning model to obtain at least one lane line feature vector;
a first determination unit, configured to determine, according to each lane line feature vector, positions of at least two lane line reference points related to each lane line on the target image;
and a fitting unit, configured to fit the position of each lane line on the target image according to the positions of the at least two lane line reference points on the target image.
CN202011302820.8A 2020-11-19 2020-11-19 Lane line determination method and device Active CN112434591B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011302820.8A CN112434591B (en) 2020-11-19 2020-11-19 Lane line determination method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011302820.8A CN112434591B (en) 2020-11-19 2020-11-19 Lane line determination method and device

Publications (2)

Publication Number Publication Date
CN112434591A true CN112434591A (en) 2021-03-02
CN112434591B CN112434591B (en) 2022-06-17

Family

ID=74694490

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011302820.8A Active CN112434591B (en) 2020-11-19 2020-11-19 Lane line determination method and device

Country Status (1)

Country Link
CN (1) CN112434591B (en)

Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130141520A1 (en) * 2011-12-02 2013-06-06 GM Global Technology Operations LLC Lane tracking system
JP2015067030A (en) * 2013-09-27 2015-04-13 日産自動車株式会社 Driving assist system
CN107590470A (en) * 2017-09-18 2018-01-16 浙江大华技术股份有限公司 A kind of method for detecting lane lines and device
CN108009524A (en) * 2017-12-25 2018-05-08 西北工业大学 A kind of method for detecting lane lines based on full convolutional network
CN108090456A (en) * 2017-12-27 2018-05-29 北京初速度科技有限公司 A kind of Lane detection method and device
US20180181817A1 (en) * 2015-09-10 2018-06-28 Baidu Online Network Technology (Beijing) Co., Ltd. Vehicular lane line data processing method, apparatus, storage medium, and device
CN108216229A (en) * 2017-09-08 2018-06-29 北京市商汤科技开发有限公司 The vehicles, road detection and driving control method and device
CN109214334A (en) * 2018-09-03 2019-01-15 百度在线网络技术(北京)有限公司 Lane line treating method and apparatus
US20190035101A1 (en) * 2017-07-27 2019-01-31 Here Global B.V. Method, apparatus, and system for real-time object detection using a cursor recurrent neural network
US20190251372A1 (en) * 2018-02-13 2019-08-15 Kpit Technologies Ltd System and method for lane detection
CN110232368A (en) * 2019-06-20 2019-09-13 百度在线网络技术(北京)有限公司 Method for detecting lane lines, device, electronic equipment and storage medium
CN110263709A (en) * 2019-06-19 2019-09-20 百度在线网络技术(北京)有限公司 Driving Decision-making method for digging and device
CN110263714A (en) * 2019-06-20 2019-09-20 百度在线网络技术(北京)有限公司 Method for detecting lane lines, device, electronic equipment and storage medium
CN110276293A (en) * 2019-06-20 2019-09-24 百度在线网络技术(北京)有限公司 Method for detecting lane lines, device, electronic equipment and storage medium
CN110688971A (en) * 2019-09-30 2020-01-14 上海商汤临港智能科技有限公司 Method, device and equipment for detecting dotted lane line
CN111095291A (en) * 2018-02-27 2020-05-01 辉达公司 Real-time detection of lanes and boundaries by autonomous vehicles
CN111259707A (en) * 2018-12-03 2020-06-09 初速度(苏州)科技有限公司 Training method of special linear lane line detection model
CN111310737A (en) * 2020-03-26 2020-06-19 深圳极视角科技有限公司 Lane line detection method and device
CN111368605A (en) * 2018-12-26 2020-07-03 易图通科技(北京)有限公司 Lane line extraction method and device
CN111460073A (en) * 2020-04-01 2020-07-28 北京百度网讯科技有限公司 Lane line detection method, apparatus, device, and storage medium
CN111476062A (en) * 2019-01-23 2020-07-31 北京市商汤科技开发有限公司 Lane line detection method and device, electronic equipment and driving system
CN111738034A (en) * 2019-03-25 2020-10-02 杭州海康威视数字技术股份有限公司 Method and device for detecting lane line

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
WEI LIU et al.: "Vision-Based Real-Time Lane Marking Detection and Tracking", 《PROCEEDINGS OF THE 11TH INTERNATIONAL IEEE CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS》 *
李梦: "Design of an online lane line recognition system based on machine vision", 《工程设计学报》 (Chinese Journal of Engineering Design) *
梁乐颖: "Research on lane line detection algorithms based on deep learning", 《中国优秀硕士学位论文全文数据库 信息科技辑》 (China Masters' Theses Full-text Database, Information Science and Technology) *

Also Published As

Publication number Publication date
CN112434591B (en) 2022-06-17

Similar Documents

Publication Publication Date Title
US11436739B2 (en) Method, apparatus, and storage medium for processing video image
CN109426801B (en) Lane line instance detection method and device
CN110622177B (en) Instance partitioning
CN107944450B (en) License plate recognition method and device
EP3786835A1 (en) Traffic image recognition method and apparatus, and computer device and medium
CN111860398B (en) Remote sensing image target detection method and system and terminal equipment
CN110852327A (en) Image processing method, image processing device, electronic equipment and storage medium
CN112712036A (en) Traffic sign recognition method and device, electronic equipment and computer storage medium
CN114549369B (en) Data restoration method and device, computer and readable storage medium
CN113326826A (en) Network model training method and device, electronic equipment and storage medium
CN111104941B (en) Image direction correction method and device and electronic equipment
CN117218622A (en) Road condition detection method, electronic equipment and storage medium
CN111382695A (en) Method and apparatus for detecting boundary points of object
CN117315406B (en) Sample image processing method, device and equipment
CN113223011B (en) Small sample image segmentation method based on guide network and full-connection conditional random field
CN112434591B (en) Lane line determination method and device
CN110969640A (en) Video image segmentation method, terminal device and computer-readable storage medium
CN114973268A (en) Text recognition method and device, storage medium and electronic equipment
CN113011268A (en) Intelligent vehicle navigation method and device, electronic equipment and storage medium
CN112654998A (en) Lane line detection method and device
CN115578246B (en) Non-aligned visible light and infrared mode fusion target detection method based on style migration
CN116740682B (en) Vehicle parking route information generation method, device, electronic equipment and readable medium
CN115661238B (en) Method and device for generating travelable region, electronic equipment and computer readable medium
EP4224361A1 (en) Lane line detection method and apparatus
CN116563840B (en) Scene text detection and recognition method based on weak supervision cross-mode contrast learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40040453

Country of ref document: HK

GR01 Patent grant