CN114821528A - Lane line detection method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114821528A
Authority
CN
China
Prior art keywords
image
lane line
detected
detection model
lane
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210419562.4A
Other languages
Chinese (zh)
Inventor
王维颂
尚广利
张伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Tsingtech Microvision Electronic Technology Co ltd
Original Assignee
Suzhou Tsingtech Microvision Electronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Tsingtech Microvision Electronic Technology Co ltd
Priority to CN202210419562.4A
Publication of CN114821528A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/25 - Fusion techniques
    • G06F 18/253 - Fusion techniques of extracted features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a lane line detection method and device, electronic equipment and a storage medium. The method comprises the following steps: acquiring an image to be detected, inputting the image to be detected into a preset detection model, and extracting visual features from the image to be detected by using a backbone network module of the preset detection model to obtain image features of the image to be detected; carrying out multi-layer fusion on the image features of the image to be detected by using a layer link module of the preset detection model to obtain fusion features of the image to be detected; performing spatial mapping on the fusion features of the image to be detected by using a feature mapping module of the preset detection model to obtain lane line features of the image to be detected; and determining the lane line information in the image to be detected according to the lane line features of the image to be detected. In other words, in the embodiment of the invention, deep learning neural network modules are used to extract and spatially map the image features, which reduces the number of model parameters and the amount of calculation during detection, increases the detection speed, and improves the detection accuracy.

Description

Lane line detection method and device, electronic equipment and storage medium
Technical Field
Embodiments of the present invention relate to computer technologies, and in particular, to a lane line detection method and apparatus, an electronic device, and a storage medium.
Background
With the widespread adoption of driver-assistance systems, sensors mounted on the vehicle are used to acquire environmental information while driving, and this information is combined with navigation map data to prevent dangerous situations, effectively improving driving comfort and safety. The lane line detection methods used by driver-assistance systems in the prior art are generally based on machine learning. Although machine-learning-based lane line detection is fast, its recognition error rate is high. Meanwhile, with the development of deep learning, lane line detection methods based on deep learning have come into use; although such methods offer high detection precision, they require a large amount of calculation and are not suitable for devices with low-end processors.
Disclosure of Invention
The invention provides a lane line detection method, a lane line detection device, an electronic device and a storage medium, which are used for improving detection precision while reducing the amount of calculation and lowering the processor requirements in the lane line detection process.
In a first aspect, an embodiment of the present invention provides a lane line detection method, where the method includes:
acquiring an image to be detected, inputting the image to be detected into a preset detection model, and extracting visual features in the image to be detected by using a backbone network module of the preset detection model to obtain image features of the image to be detected;
carrying out multi-layer fusion on the image characteristics of the image to be detected by utilizing a layer link module of the preset detection model to obtain fusion characteristics of the image to be detected;
performing space mapping on the fusion characteristics of the image to be detected by using a characteristic mapping module of the preset detection model to obtain the lane line characteristics of the image to be detected;
and determining lane line information in the image to be detected according to the lane line characteristics of the image to be detected, wherein the lane line information comprises an equation, a color and a line type of a lane line.
Further, the preset detection model is obtained in the following manner:
marking the lane lines in each image in the lane line data set to obtain lane line marking information in each image;
detecting each image by using a training detection model to obtain the detection lane line information of each image;
calculating loss entropy according to the detection lane line information of each image and the lane line marking information of each image, and optimizing the training detection model by using the loss entropy so as to obtain the preset detection model.
Further, labeling the lane line in each image in the lane line data set to obtain lane line label information in each image, including:
and determining the lane line in each image according to the shooting visual angle of each image, and marking the serial number of the lane line and the color and the line type corresponding to the serial number of the lane line from right to left by taking a running vehicle on the lane line as a center.
Further, the backbone network module of the preset detection model is built by a hybrid convolution module ShuffleNet.
Further, determining the lane line information in the image to be detected according to the lane line characteristics of the image to be detected, including:
determining an equation and confidence of a lane line in the image to be detected according to the feature points to be fitted in the lane line features of the image to be detected;
and determining the color and the line type of the lane line in the image to be detected according to the lane line characteristics of the image to be detected.
Further, determining an equation and a confidence coefficient of the lane line in the image to be detected according to the feature points to be fitted in the lane line features of the image to be detected, including:
performing linear fitting on feature points to be fitted in the lane line features of the image to be detected to obtain an equation of each lane line in the image to be detected;
and determining the confidence corresponding to the equation of each lane line according to the feature points to be fitted and the equation of each lane line.
Further, determining the confidence corresponding to the equation of each lane line according to the feature points to be fitted and the equation of each lane line includes:
determining on-line feature points corresponding to the lane lines from the feature points to be fitted according to the equation of each lane line;
and determining the confidence corresponding to the equation of each lane line according to the number of the characteristic points on the line and the data quantity of the characteristic points to be fitted.
In a second aspect, an embodiment of the present invention provides a lane line detection apparatus, including:
the feature extraction module is used for acquiring an image to be detected, inputting the image to be detected into a preset detection model, and extracting visual features from the image to be detected by using a backbone network module of the preset detection model to obtain image features of the image to be detected;
the characteristic fusion module is used for carrying out multi-layer fusion on the image characteristics of the image to be detected by utilizing the layer link module of the preset detection model to obtain the fusion characteristics of the image to be detected;
the spatial mapping module is used for carrying out spatial mapping on the fusion characteristics of the image to be detected by utilizing the characteristic mapping module of the preset detection model to obtain the lane line characteristics of the image to be detected;
and the information determining module is used for determining the lane line information in the image to be detected according to the lane line characteristics of the image to be detected, wherein the lane line information comprises an equation, a color and a line type of a lane line.
In a third aspect, an embodiment of the present invention provides an electronic device, where the electronic device includes:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the lane line detection method of any of claims 1 to 7.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the lane line detection method.
In the embodiment of the invention, an image to be detected is acquired and input into a preset detection model, and visual features are extracted from the image to be detected by using a backbone network module of the preset detection model to obtain image features of the image to be detected; multi-layer fusion is carried out on the image features of the image to be detected by using a layer link module of the preset detection model to obtain fusion features of the image to be detected; spatial mapping is performed on the fusion features of the image to be detected by using a feature mapping module of the preset detection model to obtain lane line features of the image to be detected; and lane line information in the image to be detected, comprising an equation, a color and a line type of each lane line, is determined according to the lane line features of the image to be detected. In other words, in the embodiment of the invention, the features in the image to be detected are extracted and spatially mapped through deep learning neural network modules; using these modules reduces the number of parameters in the model and the amount of calculation during detection, improves the detection speed, and lowers the processor requirements of the operating equipment, while deriving the lane line information from the extracted features improves the accuracy of the lane line information in the image to be detected.
Drawings
Fig. 1 is a schematic flow chart of a lane line detection method according to an embodiment of the present invention;
fig. 2 is another schematic flow chart of a lane line detection method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of lane marking provided by an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a lane line detection apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Fig. 1 is a schematic flow chart of a lane line detection method according to an embodiment of the present invention, which may be implemented by a lane line detection apparatus according to an embodiment of the present invention, and the apparatus may be implemented in a software and/or hardware manner. In a particular embodiment, the apparatus may be integrated in an electronic device, which may be, for example, a server. The following embodiments will be described by taking as an example that the apparatus is integrated in an electronic device, and referring to fig. 1, the method may specifically include the following steps:
s110, acquiring an image to be detected, inputting the image to be detected into a preset detection model, and extracting visual features in the image to be detected by using a backbone network module of the preset detection model to obtain image features of the image to be detected;
for example, the image to be detected may be acquired by an image acquisition device in a driver-assistance system of a vehicle, or by an image acquisition device of a roadside unit; the image acquisition device may be a camera, a video recorder, or another device with an image acquisition function. The image to be detected may be acquired in real time or in advance. When the image to be detected is acquired in real time, it may be the latest image captured by the image acquisition device, that is, the image acquired in real time is used to detect lane line information in the current driving scene of the vehicle. When the image to be detected is acquired in advance, it may be an image captured at any time and place during the acquisition process, that is, the pre-acquired image is used to detect lane line information in the scene at that time. The preset detection model may be a detection model built from several deep learning neural network modules and trained on a lane line data set, used for extracting lane line features from the image to be detected. The backbone network module of the preset detection model may be a network module formed by several deep learning network layers for extracting visual features from an image, such as a high-semantic feature layer and a low-semantic feature layer; the more network layers there are, the better the network performs, but the harder it is to train. The image features of the image to be detected may be the image information extracted from several feature layers of the image to be detected by the preset detection model.
In a specific implementation, the image to be detected is acquired by the image acquisition device; after it is acquired, feature extraction can be performed on it by inputting it into the preset detection model and extracting its image features with the model. The preset detection model is a network structure built from a backbone network module, a layer link module and a feature mapping module. The image to be detected is input into the preset detection model, and image information in several feature layers of the image is extracted by the backbone network module to obtain the image features of the image to be detected, so that the extracted image features can be used to determine the lane line features in the image and, in turn, the lane line information in the image to be detected.
S120, carrying out multi-layer fusion on the image characteristics of the image to be detected by utilizing a layer link module of a preset detection model to obtain fusion characteristics of the image to be detected;
for example, the layer link module of the preset detection model may be a neural network module that fuses features of different scales among the image features of the image to be detected, that is, it fuses the multiple detection results predicted by the backbone network module of the preset detection model to obtain the fusion features of the image to be detected. The fusion features of the image to be detected may be efficient fused features obtained by combining the multi-layer features among the image features through the correlations between different feature layers, with low-level features refining high-level features and improving the performance of the detection results of the different feature layers.
In a specific implementation, the image to be detected is input into the preset detection model, and image information in several feature layers of the image is extracted by the backbone network module to obtain the image features of the image to be detected. The image features of the image to be detected are then input into the layer link module of the preset detection model to fuse the multi-layer features among them, yielding the fusion features of the image to be detected; this makes it convenient to map the scene space according to the fusion features and to determine the lane line information in the image to be detected more accurately.
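The multi-layer fusion step can be sketched in a few lines of numpy. The FPN-style design below (upsampling a deep, semantically rich map and adding it to a shallower, higher-resolution map) is an assumption for illustration only; the patent does not specify the exact fusion architecture of the layer link module.

```python
import numpy as np

def upsample2x(x: np.ndarray) -> np.ndarray:
    # Nearest-neighbour upsampling by repeating rows and columns.
    return x.repeat(2, axis=0).repeat(2, axis=1)

def fuse(shallow: np.ndarray, deep: np.ndarray) -> np.ndarray:
    # Refine the low-level (shallow) map with upsampled high-level semantics.
    return shallow + upsample2x(deep)

shallow = np.ones((8, 8), dtype=np.float32)    # low-level feature layer
deep = np.full((4, 4), 2.0, dtype=np.float32)  # high-level feature layer
fused = fuse(shallow, deep)                    # shape (8, 8), values 3.0
```

In a real model this fusion would run per channel and be repeated across several scales, so that each feature layer's detection result benefits from the others.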
S130, performing space mapping on the fusion characteristics of the image to be detected by using a characteristic mapping module of a preset detection model to obtain lane line characteristics of the image to be detected;
for example, the feature mapping module of the preset detection model may be a neural network module that performs scene space mapping on the image information in the fusion features of the image to be detected, that is, it maps the image information extracted from the image to be detected into a lane line identification space, whose size may be preset according to actual needs and experimental data. For example, the lane line identification space may be represented as 3 × 4 × M × N, where 3 represents a three-dimensional feature space formed by the lane line positions (i.e., the equations of the lane lines), colors, and line types, 4 represents the number of lane lines, and M × N represents the size of the output features. The lane line features of the image to be detected may be the feature points, mark information, and associated credibility formed by mapping the image information of the image to be detected into the lane line identification space.
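The 3 × 4 × M × N identification space above can be illustrated with a short numpy sketch. The grid sizes M = 18 and N = 32, and the assumption that the feature mapping module emits a flat activation vector to be reshaped, are hypothetical choices for illustration:

```python
import numpy as np

M, N = 18, 32                 # hypothetical output grid size
num_attrs, num_lanes = 3, 4   # 3 attribute planes, 4 lane lines

# Assume the feature mapping module outputs a flat activation vector.
flat = np.zeros(num_attrs * num_lanes * M * N, dtype=np.float32)

# Reshape into the 3 x 4 x M x N lane line identification space.
lane_space = flat.reshape(num_attrs, num_lanes, M, N)

position_map = lane_space[0]  # per-lane feature points (lane line equation)
color_map = lane_space[1]     # per-lane color scores
linetype_map = lane_space[2]  # per-lane line-type scores
```

Each attribute plane then holds one M × N map per lane line, from which the feature points and mark information described above are read out.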
In a specific implementation, the image to be detected is input into the preset detection model, and image information in several feature layers of the image is extracted by the backbone network module to obtain the image features of the image to be detected. The image features are input into the layer link module of the preset detection model to fuse their multi-layer features, yielding the fusion features of the image to be detected; the fusion features are then input into the feature mapping module of the preset detection model, which maps the image information extracted from the image to be detected into the lane line identification space, and the resulting lane line features of the image to be detected are used to determine the lane line information in the image.
S140, determining lane line information in the image to be detected according to the lane line characteristics of the image to be detected, wherein the lane line information comprises an equation, a color and a line type of a lane line.
In a specific implementation, the lane line information in the image to be detected may consist of the information characteristics of each lane line, namely its equation, color and line type. The equation of a lane line can be used to judge the curvature of the lane; the color of a lane line can be used to determine whether a lane change is permitted while driving and whether the driver's use of the current lane is compliant; and the line type of a lane line can be used to determine whether crossing the line is prohibited, again indicating whether the driver's use of the current lane is compliant. The lane line information in the image to be detected is obtained by analyzing the feature points and mark information in the lane line features of the image, so that it can be used by the vehicle's driver-assistance system to avoid dangerous events while driving; the color and line type in the lane line information each carry a corresponding confidence.
In the embodiment of the invention, an image to be detected is acquired and input into a preset detection model, and visual features are extracted from the image to be detected by using a backbone network module of the preset detection model to obtain image features of the image to be detected; multi-layer fusion is carried out on the image features of the image to be detected by using a layer link module of the preset detection model to obtain fusion features of the image to be detected; spatial mapping is performed on the fusion features of the image to be detected by using a feature mapping module of the preset detection model to obtain lane line features of the image to be detected; and lane line information in the image to be detected, comprising an equation, a color and a line type of each lane line, is determined according to the lane line features of the image to be detected. In other words, in the embodiment of the invention, the features in the image to be detected are extracted and spatially mapped through deep learning network modules; using these modules reduces the number of parameters in the model and the amount of calculation during detection, improves the detection speed, and lowers the requirements on the operating equipment, while deriving the lane line information from the extracted features improves the accuracy of the lane line information in the image to be detected.
The lane line detection method provided by the embodiment of the present invention is further described below, and as shown in fig. 2, the method may specifically include the following steps:
s210, acquiring an image to be detected, inputting the image to be detected into a preset detection model, and extracting visual features in the image to be detected by using a mixed convolution module ShuffleNet in a backbone network module of the preset detection model to obtain image features of the image to be detected;
in a specific implementation, the preset detection model may be a network structure built from a backbone network module, a layer link module and a feature mapping module. The backbone network module of the preset detection model is built from the hybrid convolution module ShuffleNet, where ShuffleNet may be a neural network module that uniformly rearranges the channels of different convolutions and performs hybrid convolution according to the rearranged channel order. This avoids the drawback that features in grouped convolutions cannot communicate, allows image features and the global information of the image to be extracted better, and, by outputting feature maps of the same size, reduces the amount of calculation and the parameter size during feature extraction, thereby reducing the computation of the preset detection model in the feature extraction process. After the image to be detected is obtained, it is input into the preset detection model, and its visual features are extracted using the ShuffleNet hybrid convolution module in the backbone network module of the preset detection model, reducing the computation in the model's feature extraction process and thus lowering the processor requirements of the electronic equipment using this lane line detection method.
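The channel rearrangement that ShuffleNet relies on can be shown in isolation with a minimal numpy sketch (the full backbone is of course not reproduced here): after a grouped convolution, channels are interleaved across groups so that the next grouped layer sees features from every group.

```python
import numpy as np

def channel_shuffle(x: np.ndarray, groups: int) -> np.ndarray:
    # x has layout (batch, channels, height, width).
    n, c, h, w = x.shape
    assert c % groups == 0
    # Split channels into groups, transpose group and sub-channel axes,
    # then flatten back: this interleaves channels across the groups.
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)
    return x.reshape(n, c, h, w)

x = np.arange(8, dtype=np.float32).reshape(1, 8, 1, 1)
y = channel_shuffle(x, groups=2)
# Channels 0..7 are interleaved as 0, 4, 1, 5, 2, 6, 3, 7
```

Because the operation is a pure reshuffle, it adds no parameters and negligible computation, which is consistent with the lightweight-backbone goal described above.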
Further, the preset detection model is obtained in the following manner:
marking the lane lines in each image in the lane line data set to obtain lane line marking information in each image;
detecting each image by using a training detection model to obtain the detection lane line information of each image;
and calculating loss entropy according to the detection lane line information of each image and the lane line marking information of each image, and optimally training the detection model by using the loss entropy so as to obtain a preset detection model.
For example, the lane line data set may be an image set including lane lines collected according to a lane line detection target, and the lane lines in each image in the lane line data set are labeled in advance to obtain lane line marking information in each image. The training detection model can be a network model which is set up for the purpose of detecting the lane lines and is used for detecting each image to obtain the detection lane line information of each image. The detected lane line information of each image is calculated by the lane line characteristics of each image extracted by the training detection model.
In the specific implementation, information labeling is performed on the lane lines in each image in the lane line data set in advance, the feature information of the lane lines is marked on each image, and the lane line marking information of each image is obtained, so that each image contains the lane line information. And inputting each marked image into a training detection model for detection to obtain the lane line characteristics in each image, and calculating the detection lane line information of each image according to the lane line characteristics in each image. And calculating the entropy value of the loss function according to the detected lane line information of each image and the lane line marking information of each image. And determining whether the training detection model is converged according to the entropy of the loss function, performing back propagation according to the entropy of the loss function to optimize parameters of the training detection model until the entropy of the loss function is smaller than a preset entropy threshold, and determining that the training detection model is converged to obtain a preset detection model.
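The training loop above, with its loss entropy and preset entropy threshold, can be sketched on a toy problem. The one-weight sigmoid "model", the data, the learning rate and the threshold value are all hypothetical stand-ins; only the loop structure (compute cross-entropy, back-propagate, stop below a threshold) mirrors the procedure described.

```python
import numpy as np

def cross_entropy(pred, target, eps=1e-7):
    # Binary cross-entropy ("loss entropy") between predictions and labels.
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

rng = np.random.default_rng(0)
x = rng.normal(size=200)             # stand-in per-sample feature
target = (x > 0).astype(np.float64)  # stand-in lane/background label
w, lr, threshold = 0.1, 0.5, 0.05    # hypothetical hyperparameters

for step in range(5000):
    pred = 1.0 / (1.0 + np.exp(-w * x))  # sigmoid prediction
    loss = cross_entropy(pred, target)
    if loss < threshold:                 # converged: entropy below threshold
        break
    grad = np.mean((pred - target) * x)  # dL/dw for sigmoid + cross-entropy
    w -= lr * grad                       # back-propagation update
```

A real implementation would update all network parameters via automatic differentiation rather than a single hand-derived gradient, but the convergence test against a preset entropy threshold is the same.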
In the embodiment of the invention, the lane line data set may be obtained by data preprocessing, where the data preprocessing may include redundancy elimination and image quality monitoring. Image quality monitoring may mean that the definition of the images and the size of the lane lines in the images are preset according to actual requirements and experimental data. The lane lines in each image in the lane line data set are labeled in advance, and the lane line marking information in each image is cross-verified: the marking information produced for the same image by different annotators in different time periods is compared, and the accuracy of the lane line marking information of each image is determined through its consistency, so as to avoid feeding erroneous marking information into the training of the detection model and thereby increasing the training duration.
Further, labeling the lane line in each image in the lane line data set to obtain lane line label information in each image, including:
and determining the lane line in each image according to the shooting visual angle of each image, and marking the serial number of the lane line and the color and the line type corresponding to the serial number of the lane line from right to left by taking the running vehicle on the lane line as the center.
For example, the shooting angle of view of each image may be the shooting angle or the positional relationship between the camera and the lane lines; since these differ between images, the length and width of the lane lines may differ in each captured image. The lane lines in each image may be the lines on both sides of each lane in the image; the number of lane lines differs with the number of lanes, and different lane line types are set according to road conditions, where a line type may be dashed or solid, and both may be further divided into double and single lines. The driving vehicle on the lane may be a vehicle traveling in a lane bounded by lane lines. The lane line sequence number may be a number marked on each image according to a preset lane line sequencing rule which, taking the driving vehicle on the lane as the center, marks the sequence numbers of the lane lines on the left from right to left and those on the right from left to right.
In a specific implementation, the length of the lane line in each image and its position in the image are determined according to the shooting angle of each image. With the running vehicle on the lane as the center, serial numbers are marked from right to left on the left side of the vehicle and from left to right on the right side, and the color and line type corresponding to each serial number are marked according to the color and line type of that lane line in each image.
Fig. 3 is a schematic diagram of the lane marking principle provided by an embodiment of the present invention. As shown in Fig. 3, with the running vehicle as the center, the lane lines on its left are marked -1 and -2 in sequence from right to left, and the lane lines on its right are marked 1 and 2 in sequence from left to right; a solid line is marked a, a dotted line is marked b, a white lane line is marked w, and a yellow lane line is marked y.
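The Fig. 3 labeling convention can be captured in a small helper. This is an illustrative sketch only; the dictionaries and the `make_label` record format are assumptions introduced for the example, not part of the patent.

```python
# Lane-line label encoding following the Fig. 3 convention:
# serial numbers -1, -2 count right-to-left on the vehicle's left side,
# 1, 2 count left-to-right on its right side; "a" = solid line,
# "b" = dotted line, "w" = white, "y" = yellow.
LINE_TYPES = {"a": "solid", "b": "dotted"}
COLORS = {"w": "white", "y": "yellow"}


def make_label(serial, line_type, color):
    """Build one lane-line annotation record, e.g. make_label(-1, "a", "w")
    for the solid white line immediately left of the vehicle."""
    assert line_type in LINE_TYPES and color in COLORS
    return {"serial": serial, "line_type": line_type, "color": color}
```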
S220, carrying out multi-layer fusion on the image characteristics of the image to be detected by utilizing a layer link module of a preset detection model to obtain fusion characteristics of the image to be detected;
S230, performing spatial mapping on the fusion characteristics of the image to be detected by using a characteristic mapping module of a preset detection model to obtain the lane line characteristics of the image to be detected;
S240, determining an equation and a confidence of a lane line in the image to be detected according to the feature points to be fitted in the lane line characteristics of the image to be detected;
In a specific implementation, the feature points to be fitted in the lane line features of the image to be detected may be the image points on each lane line extracted from the image to be detected, where each lane line corresponds to one set of feature points to be fitted, and any one such set is used to calculate the equation fitting one lane line in the image to be detected. The equation of a lane line in the image to be detected may be a relational equation, fitted according to the position information of the image points on that lane line, that describes all image points on the lane line and can reproduce the shape and trend of the lane line and its position in the image to be detected. The confidence of the equation of a lane line may be the probability that the equation is reliable. The lane line features of the image to be detected may be the image information on the lane lines obtained through the feature mapping module in the lane line identification space, where this image information includes the feature points to be fitted of each lane line of the image to be detected. Linear fitting is performed according to the feature points to be fitted in the lane line features, the equation of each lane line in the image to be detected is calculated, and the confidence of the equation of each lane line is then calculated using the feature points to be fitted and the equation.
Further, determining an equation and a confidence coefficient of the lane line in the image to be detected according to the feature points to be fitted in the lane line features of the image to be detected, including:
carrying out linear fitting on feature points to be fitted in the lane line features of the image to be detected to obtain an equation of each lane line in the image to be detected;
and determining the confidence corresponding to the equation of each lane line according to the characteristic points to be fitted and the equation of each lane line.
In a specific implementation, fitting proceeds according to the position information of the feature points to be fitted in the lane line features of the image to be detected: if the feature points to be fitted are three-dimensional spatial image information, they can be converted into a rectangular coordinate system, and two-dimensional linear fitting can be performed on the converted position information to obtain the equation of each lane line in the image to be detected; alternatively, linear fitting can be performed directly on the three-dimensional spatial image information. The number of feature points to be fitted lying on the equation of each lane line is then determined from the feature points to be fitted and the equation of each lane line, and the confidence corresponding to the equation of each lane line is determined from the ratio of that number to the total number of feature points to be fitted.
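The two-dimensional fitting step can be sketched as below. This is a minimal illustration under assumptions: the patent does not specify the fitting method, so a least-squares polynomial fit is used, the lane line is modeled as x = f(y) (suited to near-vertical lines in image coordinates), and the function name and `degree` parameter are introduced for the example.

```python
import numpy as np


def fit_lane_equation(points, degree=2):
    """Fit one lane line's feature points with a polynomial x = f(y).

    `points` is a sequence of (x, y) image coordinates for a single
    set of feature points to be fitted; fitting x as a function of y
    suits near-vertical lane lines.  Returns the coefficient vector
    of the fitted equation, highest degree first.
    """
    pts = np.asarray(points, dtype=float)
    return np.polyfit(pts[:, 1], pts[:, 0], degree)
```

With `degree=1` the result is a straight-line equation; a higher degree can follow curved lane lines.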
Further, determining the confidence corresponding to the equation of each lane line according to the feature points to be fitted and the equation of each lane line, including:
determining on-line feature points corresponding to the lane lines from the feature points to be fitted according to the equation of each lane line;
and determining the confidence corresponding to the equation of each lane line according to the number of the characteristic points on the line and the data quantity of the characteristic points to be fitted.
In a specific implementation, the feature points to be fitted are substituted into the equation of the corresponding lane line; the feature points that satisfy the equation of a lane line are taken as the on-line feature points of that lane line, and the feature points that do not are taken as its discrete feature points. The feature points to be fitted corresponding to the equation of each lane line are counted to obtain the number of feature points to be fitted, and the feature points satisfying the equation of each lane line are counted to obtain the number of on-line feature points. Dividing the number of on-line feature points by the number of feature points to be fitted gives the percentage of on-line feature points among the feature points to be fitted, and this percentage is taken as the confidence corresponding to the equation of each lane line.
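The on-line/discrete split and the resulting confidence can be sketched as follows. The pixel tolerance `tol` is an assumption: the patent says points "conform to" the equation without fixing a criterion, so a distance threshold stands in for it here.

```python
import numpy as np


def lane_confidence(points, coeffs, tol=2.0):
    """Confidence of a fitted lane-line equation: the fraction of the
    feature points to be fitted that lie on the fitted curve.

    A point counts as an on-line feature point when its x coordinate
    is within `tol` pixels of the value predicted by the polynomial
    equation x = f(y); the remaining points are discrete points.
    """
    pts = np.asarray(points, dtype=float)
    predicted_x = np.polyval(coeffs, pts[:, 1])
    on_line = np.abs(pts[:, 0] - predicted_x) <= tol
    return float(on_line.sum()) / len(pts)
```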
And S250, determining the color and the line type of the lane line in the image to be detected according to the lane line characteristics of the image to be detected.
In a specific implementation, the feature mapping module identifies the color and the line type of each lane line in the lane line identification space, and the lane line information output by the preset detection model contains, for each feature point to be fitted, a confidence for each color and a confidence for each line type. The color and the line type of each lane line are then determined from these confidences: a preset confidence threshold can be used, with the color and the line type whose confidence exceeds the threshold taken as the color and line type of the lane line; alternatively, the color and the line type corresponding to the highest value among all the confidences for the feature points to be fitted are selected as the color and line type of each lane line.
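Both selection strategies described above (threshold and highest-confidence) fit in one small helper. The function name and the dictionary representation of per-class confidences are assumptions for this sketch.

```python
def pick_attribute(confidences, threshold=None):
    """Choose a lane line's color (or line type) from per-class
    confidences, e.g. {"white": 0.8, "yellow": 0.2}.

    With a threshold, return the first class whose confidence exceeds
    it (or None if no class qualifies); without a threshold, fall back
    to the highest-confidence class.
    """
    if threshold is not None:
        for name, conf in confidences.items():
            if conf > threshold:
                return name
        return None
    return max(confidences, key=confidences.get)
```

The same call is made twice per lane line: once over the color confidences and once over the line-type confidences.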
In the embodiment of the invention, an image to be detected is acquired and input into a preset detection model; the visual features in the image to be detected are extracted by a backbone network module of the preset detection model to obtain the image features of the image to be detected; the image features are fused across multiple layers by a layer link module of the preset detection model to obtain the fusion features of the image to be detected; the fusion features are spatially mapped by a feature mapping module of the preset detection model to obtain the lane line features of the image to be detected; and the lane line information in the image to be detected, including the equation, color and line type of each lane line, is determined from the lane line features. In other words, in the embodiment of the invention, the features in the image to be detected are extracted and spatially mapped by the modules of a deep learning network. Using these modules reduces the number of parameters in the model and the amount of calculation in the detection process, which increases the detection speed and lowers the requirements on the operating equipment, while obtaining the lane line information from the extracted features improves the accuracy of the lane line information in the image to be detected.
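The overall data flow through the preset detection model can be sketched as a composition of the four stages. This is a structural illustration only: each stage is an opaque callable standing in for the corresponding module, and the function signature is an assumption, not the patent's implementation.

```python
def detect_lanes(image, backbone, layer_link, feature_map, head):
    """End-to-end flow of the preset detection model: the backbone
    extracts visual features, the layer link module fuses them across
    layers, the feature mapping module projects the fused features
    into lane-line space, and the head turns lane-line features into
    the final lane line information (equation, color, line type)."""
    image_features = backbone(image)
    fused_features = layer_link(image_features)
    lane_features = feature_map(fused_features)
    return head(lane_features)
```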
Fig. 4 is a schematic structural diagram of a lane line detection apparatus according to an embodiment of the present invention, and as shown in fig. 4, the lane line detection apparatus includes:
the feature extraction module 410 is configured to obtain an image to be detected, input the image to be detected into a preset detection model, and extract visual features in the image to be detected by using a backbone network module of the preset detection model to obtain image features of the image to be detected;
the feature fusion module 420 is configured to perform multi-layer fusion on the image features of the image to be detected by using the layer link module of the preset detection model to obtain fusion features of the image to be detected;
the spatial mapping module 430 is configured to perform spatial mapping on the fusion features of the image to be detected by using the feature mapping module of the preset detection model to obtain lane line features of the image to be detected;
the information determining module 440 is configured to determine lane line information in the image to be detected according to lane line characteristics of the image to be detected, where the lane line information includes an equation, a color, and a line type of a lane line.
In an embodiment, the preset detection model is obtained by the feature extraction module 410 in the following manner:
marking the lane lines in each image in the lane line data set to obtain lane line marking information in each image;
detecting each image by using a training detection model to obtain the detection lane line information of each image;
calculating loss entropy according to the detection lane line information of each image and the lane line marking information of each image, and optimizing the training detection model by using the loss entropy so as to obtain the preset detection model.
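The loss-entropy calculation above can be sketched as a standard cross-entropy between the detected lane line information and the labeling information. This is a minimal numerical illustration under the assumption that "loss entropy" denotes cross-entropy over class probabilities; the function names and list-based representation are introduced for the example, and a real training loop would also perform the optimizer update.

```python
import math


def cross_entropy(predicted_probs, true_index):
    """Loss entropy between one predicted class distribution and its
    ground-truth label, as used when comparing detected lane line
    information against the lane line labeling information."""
    return -math.log(predicted_probs[true_index])


def batch_loss(predictions, labels):
    """Average loss entropy over all images; the training detection
    model's parameters would be optimized to reduce this value,
    yielding the preset detection model."""
    total = sum(cross_entropy(p, t) for p, t in zip(predictions, labels))
    return total / len(labels)
```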
In one embodiment, the feature extraction module 410 labels the lane line in each image in the lane line data set to obtain lane line marking information in each image, including:
and determining the lane line in each image according to the shooting visual angle of each image, and marking the serial number of the lane line and the color and the line type corresponding to the serial number of the lane line from right to left by taking a running vehicle on the lane line as a center.
In an embodiment, the backbone network module of the preset detection model used by the feature extraction module 410 is built from a hybrid convolution module, ShuffleNet.
In an embodiment, the determining module 440 determines the lane line information in the image to be detected according to the lane line feature of the image to be detected, including:
determining an equation and confidence of a lane line in the image to be detected according to the feature points to be fitted in the lane line features of the image to be detected;
and determining the color and the line type of the lane line in the image to be detected according to the lane line characteristics of the image to be detected.
In an embodiment, the determining module 440 determines an equation and a confidence of the lane line in the image to be detected according to the feature point to be fitted in the lane line feature of the image to be detected, including:
performing linear fitting on feature points to be fitted in the lane line features of the image to be detected to obtain an equation of each lane line in the image to be detected;
and determining the confidence corresponding to the equation of each lane line according to the feature points to be fitted and the equation of each lane line.
In an embodiment, the determining module 440 determines the confidence corresponding to the equation of each lane line according to the feature points to be fitted and the equation of each lane line, including:
determining on-line feature points corresponding to the lane lines from the feature points to be fitted according to the equation of each lane line;
and determining the confidence corresponding to the equation of each lane line according to the number of the characteristic points on the line and the data quantity of the characteristic points to be fitted.
In the embodiment of the invention, an image to be detected is acquired and input into a preset detection model; the visual features in the image to be detected are extracted by a backbone network module of the preset detection model to obtain the image features of the image to be detected; the image features are fused across multiple layers by a layer link module of the preset detection model to obtain the fusion features of the image to be detected; the fusion features are spatially mapped by a feature mapping module of the preset detection model to obtain the lane line features of the image to be detected; and the lane line information in the image to be detected, including the equation, color and line type of each lane line, is determined from the lane line features. In other words, in the embodiment of the invention, the features in the image to be detected are extracted and spatially mapped by the modules of a deep learning network. Using these modules reduces the number of parameters in the model and the amount of calculation in the detection process, which increases the detection speed and lowers the requirements on the operating equipment, while obtaining the lane line information from the extracted features improves the accuracy of the lane line information in the image to be detected.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. FIG. 5 illustrates a block diagram of an exemplary electronic device 12 suitable for use in implementing embodiments of the present invention. The electronic device 12 shown in fig. 5 is only an example and should not bring any limitation to the function and the scope of use of the embodiment of the present invention.
As shown in FIG. 5, electronic device 12 is embodied in the form of a general purpose computing device. The components of electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Electronic device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in Fig. 5, and commonly referred to as a "hard drive"). Although not shown in Fig. 5, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
Electronic device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with electronic device 12, and/or with any devices (e.g., network card, modem, etc.) that enable electronic device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, the electronic device 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) via the network adapter 20. As shown, the network adapter 20 communicates with other modules of the electronic device 12 via the bus 18. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing by running a program stored in the system memory 28, for example, to implement the lane line detection method provided by the embodiment of the present invention, and the method includes:
acquiring an image to be detected, inputting the image to be detected into a preset detection model, and extracting visual features in the image to be detected by using a backbone network module of the preset detection model to obtain image features of the image to be detected;
carrying out multi-layer fusion on the image characteristics of the image to be detected by utilizing a layer link module of the preset detection model to obtain fusion characteristics of the image to be detected;
performing space mapping on the fusion characteristics of the image to be detected by using a characteristic mapping module of the preset detection model to obtain the lane line characteristics of the image to be detected;
and determining lane line information in the image to be detected according to the lane line characteristics of the image to be detected, wherein the lane line information comprises an equation, a color and a line type of a lane line.
An embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the lane line detection method, and the method includes:
acquiring an image to be detected, inputting the image to be detected into a preset detection model, and extracting visual features in the image to be detected by using a backbone network module of the preset detection model to obtain image features of the image to be detected;
carrying out multi-layer fusion on the image characteristics of the image to be detected by utilizing a layer link module of the preset detection model to obtain fusion characteristics of the image to be detected;
performing space mapping on the fusion characteristics of the image to be detected by using a characteristic mapping module of the preset detection model to obtain the lane line characteristics of the image to be detected;
and determining lane line information in the image to be detected according to the lane line characteristics of the image to be detected, wherein the lane line information comprises an equation, a color and a line type of a lane line.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A lane line detection method is characterized by comprising the following steps:
acquiring an image to be detected, inputting the image to be detected into a preset detection model, and extracting visual features in the image to be detected by using a backbone network module of the preset detection model to obtain image features of the image to be detected;
carrying out multi-layer fusion on the image characteristics of the image to be detected by utilizing a layer link module of the preset detection model to obtain fusion characteristics of the image to be detected;
performing space mapping on the fusion characteristics of the image to be detected by using a characteristic mapping module of the preset detection model to obtain the lane line characteristics of the image to be detected;
and determining lane line information in the image to be detected according to the lane line characteristics of the image to be detected, wherein the lane line information comprises an equation, a color and a line type of a lane line.
2. The method of claim 1, wherein the predetermined detection model is obtained as follows:
marking the lane lines in each image in the lane line data set to obtain lane line marking information in each image;
detecting each image by using a training detection model to obtain the detection lane line information of each image;
calculating loss entropy according to the detection lane line information of each image and the lane line marking information of each image, and optimizing the training detection model by using the loss entropy so as to obtain the preset detection model.
3. The method of claim 2, wherein labeling the lane lines in each image in the lane line dataset to obtain lane line labeling information in each image comprises:
and determining the lane line in each image according to the shooting visual angle of each image, and marking the serial number of the lane line and the color and the line type corresponding to the serial number of the lane line from right to left by taking a running vehicle on the lane line as a center.
4. The method according to claim 1, wherein the backbone network module of the preset detection model is built by a hybrid convolution module ShuffleNet.
5. The method according to claim 1, wherein determining lane line information in the image to be detected according to the lane line characteristics of the image to be detected comprises:
determining an equation and confidence of a lane line in the image to be detected according to the feature points to be fitted in the lane line features of the image to be detected;
and determining the color and the line type of the lane line in the image to be detected according to the lane line characteristics of the image to be detected.
6. The method according to claim 5, wherein determining an equation and confidence of a lane line in the image to be detected according to feature points to be fitted in the lane line features of the image to be detected comprises:
performing linear fitting on feature points to be fitted in the lane line features of the image to be detected to obtain an equation of each lane line in the image to be detected;
and determining the confidence corresponding to the equation of each lane line according to the feature points to be fitted and the equation of each lane line.
7. The method of claim 6, wherein determining the confidence corresponding to the equation of each lane line according to the feature points to be fitted and the equation of each lane line comprises:
determining on-line feature points corresponding to the lane lines from the feature points to be fitted according to the equation of each lane line;
and determining the confidence corresponding to the equation of each lane line according to the number of the characteristic points on the line and the data quantity of the characteristic points to be fitted.
8. A lane line detection apparatus, comprising:
the characteristic extraction module is used for acquiring an image to be detected, inputting the image to be detected into a preset detection model, and extracting visual characteristics in the image to be detected by using a main network module of the preset detection model to obtain image characteristics of the image to be detected;
the characteristic fusion module is used for carrying out multi-layer fusion on the image characteristics of the image to be detected by utilizing the layer link module of the preset detection model to obtain the fusion characteristics of the image to be detected;
the spatial mapping module is used for carrying out spatial mapping on the fusion characteristics of the image to be detected by utilizing the characteristic mapping module of the preset detection model to obtain the lane line characteristics of the image to be detected;
and the information determining module is used for determining the lane line information in the image to be detected according to the lane line characteristics of the image to be detected, wherein the lane line information comprises an equation, a color and a line type of a lane line.
9. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the lane line detection method of any of claims 1 to 7.
10. A computer-readable storage medium on which a computer program is stored, the program, when being executed by a processor, implementing the lane line detection method according to any one of claims 1 to 7.
CN202210419562.4A 2022-04-20 2022-04-20 Lane line detection method and device, electronic equipment and storage medium Pending CN114821528A (en)

Publications (1)

Publication Number Publication Date
CN114821528A true CN114821528A (en) 2022-07-29

Family

ID=82506012

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210419562.4A Pending CN114821528A (en) 2022-04-20 2022-04-20 Lane line detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114821528A (en)

Similar Documents

Publication Publication Date Title
CN111626208B (en) Method and device for detecting small objects
CN109145680B (en) Method, device and equipment for acquiring obstacle information and computer storage medium
CN110163176B (en) Lane line change position identification method, device, equipment and medium
EP4152204A1 (en) Lane line detection method, and related apparatus
CN109767637B (en) Method and device for identifying and processing countdown signal lamp
CN109931945B (en) AR navigation method, device, equipment and storage medium
CN108734058B (en) Obstacle type identification method, device, equipment and storage medium
CN112967283B (en) Target identification method, system, equipment and storage medium based on binocular camera
CN109035831A (en) Recognition methods, device, equipment, storage medium and the vehicle of traffic light
CN111931683B (en) Image recognition method, device and computer readable storage medium
CN110647886A (en) Interest point marking method and device, computer equipment and storage medium
CN112200142A (en) Method, device, equipment and storage medium for identifying lane line
CN112507852A (en) Lane line identification method, device, equipment and storage medium
CN116964588A (en) Target detection method, target detection model training method and device
CN114419601A (en) Obstacle information determination method, obstacle information determination device, electronic device, and storage medium
CN109635868B (en) Method and device for determining obstacle type, electronic device and storage medium
CN114820679A (en) Image annotation method and device, electronic equipment and storage medium
CN113537026A (en) Primitive detection method, device, equipment and medium in building plan
CN110555352A (en) interest point identification method, device, server and storage medium
CN109270566B (en) Navigation method, navigation effect testing method, device, equipment and medium
CN116107576A (en) Page component rendering method and device, electronic equipment and vehicle
CN114743395B (en) Signal lamp detection method, device, equipment and medium
CN113591543B (en) Traffic sign recognition method, device, electronic equipment and computer storage medium
CN115578386A (en) Parking image generation method and device, electronic equipment and storage medium
CN114821528A (en) Lane line detection method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination