CN116912790A - Lane line detection method and device - Google Patents


Info

Publication number
CN116912790A
CN116912790A (application CN202310726339.9A)
Authority
CN
China
Prior art keywords
initial
lane line
key point
key points
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310726339.9A
Other languages
Chinese (zh)
Inventor
张重鹏
多好学
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Huanyu Zhixing Technology Co ltd
Original Assignee
Wuhan Huanyu Zhixing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Huanyu Zhixing Technology Co ltd filed Critical Wuhan Huanyu Zhixing Technology Co ltd
Priority to CN202310726339.9A priority Critical patent/CN116912790A/en
Publication of CN116912790A publication Critical patent/CN116912790A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/7715 Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a lane line detection method and device, including the following steps: acquiring an initial image, and performing low-computation-power detection of the initial key points of the initial image to obtain feature dimension information of the initial key points, where the feature dimension information includes a preset number of detected lane lines and the initial key points corresponding to each lane line; screening the initial key points according to the feature dimension information to obtain the effective key points corresponding to each lane line; and processing the effective key points of each lane line by a curve fitting method to obtain the target lane line corresponding to each lane line. Because the initial image is processed with low-computation-power detection and no other complex operators are needed, the method reduces the computational complexity of lane line detection and increases the data processing speed; by processing the initial key points with a curve fitting method, it solves the technical problem of detecting lane lines on low-computing-power platforms.

Description

Lane line detection method and device
Technical Field
The application relates to the technical field of lane line detection, in particular to a lane line detection method and device.
Background
With the rapid development of deep learning, a great deal of research has been applied to lane line detection tasks. Current lane line detection techniques are broadly divided into segmentation-based, detection-based, keypoint-based, and polynomial-regression-based methods.
The existing lane line detection methods are as follows. (1) Segmentation-based methods adopt pixel-by-pixel classification, but are affected by locality and perform poorly under occlusion or extreme illumination; instance separation is also difficult, requires complex post-processing, and is slow. (2) Detection-based methods have a large computational cost, and the need for predefined anchor boxes makes them inflexible and hard to adapt to the many complicated lane line shapes. (3) Keypoint-based methods use stacked hourglass networks to predict keypoint positions and feature embeddings, and cluster different lane instances by the similarity between embeddings; although a stacked hourglass network can be pruned directly to reduce its parameters and reused without retraining, most low-computing-power embedded platforms do not support it. (4) Polynomial-regression-based methods directly learn to predict polynomial coefficients with a simple fully connected layer; although fast, they hit an accuracy bottleneck due to dataset imbalance and sensitivity to the predicted parameters.
Therefore, there is an urgent need for a lane line detection method and device that solve the technical problems in the prior art that the computational cost of lane line detection is large, the detection speed is slow, and lane lines cannot be detected on a low-computing-power embedded platform.
Disclosure of Invention
In view of the foregoing, it is necessary to provide a lane line detection method and device to solve the technical problems in the prior art that, because the computational cost of lane line detection is large, the detection speed is slow and lane lines cannot be detected on a low-computing-power embedded platform.
In one aspect, the present application provides a lane line detection method, including:
acquiring an initial image, and performing low-computation-power detection of initial key points of the initial image to obtain feature dimension information of the initial key points, the feature dimension information including a preset number of detected lane lines and the initial key points corresponding to each lane line;
screening the initial key points according to the feature dimension information to obtain effective key points corresponding to each lane line; and
processing the effective key points of each lane line by a curve fitting method to obtain a target lane line corresponding to each lane line.
In some possible implementations, performing low-computation-power detection of the initial key points of the initial image to obtain the feature dimension information of the initial key points includes:
performing low-computation-power detection of the initial key points of the initial image to obtain initial feature data of the initial key points;
performing reverse processing on the initial feature data through a deconvolution layer to obtain an initial feature image; and
performing convolution dimension reduction on the initial feature image to obtain the feature dimension information of the initial key points.
In some possible implementations, the curve fitting method includes:
performing similarity calculation on the effective key points on the lane lines to obtain a key point set of the lane lines;
and performing curve fitting on the key point set of the lane line to obtain a target lane line of the lane line.
In some possible implementations, the feature dimension information includes a confidence feature image;
the convolution dimension reduction processing is performed on the initial feature image to obtain feature dimension information of the initial key point, including:
establishing a pixel coordinate system according to the initial characteristic image;
establishing a grid on the pixel coordinate system according to the output dimension of the initial image;
according to the grids, initial coordinates corresponding to each initial key point on the pixel coordinate system are respectively determined;
determining the confidence coefficient corresponding to each initial key point according to the initial coordinates of each initial key point;
and outputting, from the initial feature image, a confidence feature image including the confidence corresponding to each initial key point.
In some possible implementations, the feature dimension information includes an offset feature image;
the convolution dimension reduction processing is performed on the initial feature image to obtain feature dimension information of the initial key point, including:
calculating the positions of the initial key points on the initial feature image according to an offset function to obtain a horizontal axis offset and a vertical axis offset corresponding to each initial key point; and
outputting, from the initial feature image, a horizontal axis offset feature image including the horizontal axis offset corresponding to each initial key point and a vertical axis offset feature image including the vertical axis offset.
In some possible implementations, the feature dimension information includes a high-dimensional feature image;
the convolution dimension reduction processing is performed on the initial feature image to obtain feature dimension information of the initial key point, including:
constructing a similarity matrix according to the number of the initial key points on the initial characteristic image;
according to the feature information between every two initial key points, calculating the similarity of each initial key point corresponding to other initial key points;
determining the same lane line from the initial key points with the similarity smaller than a similarity threshold value to obtain a preset number of lane lines;
and outputting a high-dimensional characteristic image comprising the preset number of lane lines according to the initial characteristic image.
In some possible implementations, the screening the initial key points according to the feature dimension information to obtain valid key points corresponding to each lane line includes:
determining, according to the confidence feature image, the initial key points whose confidence is greater than or equal to a confidence threshold as first key points, and obtaining the initial coordinates corresponding to each first key point;
calculating, from the initial coordinates corresponding to each first key point and the horizontal and vertical axis offsets in the corresponding offset feature images, the accurate position corresponding to each first key point; and
determining, according to the accurate position corresponding to each first key point on the high-dimensional feature image, the second key points corresponding to each of the preset number of lane lines, and determining the second key points corresponding to each lane line as the effective key points corresponding to that lane line.
In some possible implementations, the performing similarity calculation on the valid keypoints on the lane line to obtain a set of keypoints on the lane line includes:
randomly selecting a preset number of third key points from the effective key points;
respectively calculating the distance between each third key point and other third key points to obtain the distance value of each third key point;
and determining the third key point with the distance value larger than or equal to a distance threshold value as a fourth key point, and obtaining a key point set of the lane line according to the fourth key point.
In some possible implementations, the performing curve fitting on the set of key points of the lane line to obtain a target lane line of the lane line includes:
performing cubic curve fitting on the fourth key points of the lane line according to the key point set to obtain continuous values corresponding to each fourth key point;
and replacing each fourth key point with the corresponding continuous value to obtain a target lane line of the lane line.
In another aspect, the application also provides a lane line detection device, which includes:
a key point detection module, configured to acquire an initial image and perform low-computation-power detection of the initial key points of the initial image to obtain the feature dimension information of the initial key points, where the feature dimension information includes a preset number of detected lane lines and the initial key points corresponding to each lane line;
a key point screening module, configured to screen the initial key points according to the feature dimension information to obtain the effective key points corresponding to each lane line; and
a lane line fitting module, configured to process the effective key points of each lane line by a curve fitting method to obtain the target lane line corresponding to each lane line.
The beneficial effects of adopting these embodiments are as follows. In the lane line detection method provided by the application, an initial image is acquired and low-computation-power detection is performed on the initial key points of the initial image to obtain the feature dimension information of the initial key points, where the feature dimension information includes a preset number of detected lane lines and the initial key points corresponding to each lane line; the initial key points are screened according to the feature dimension information to obtain the effective key points corresponding to each lane line; and the effective key points of each lane line are processed by a curve fitting method to obtain the target lane line corresponding to each lane line. Because the initial image is processed with low-computation-power detection and no other complex operators are needed, the method reduces the computational complexity of lane line detection and increases the data processing speed; by processing the initial key points with a curve fitting method, it solves the technical problem of detecting lane lines with low computing power.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the following description will briefly explain the drawings needed in the description of the embodiments, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of an embodiment of a lane line detection method according to the present application;
FIG. 2 is a schematic diagram of a key point screening embodiment according to the present application;
FIG. 3 is a schematic view of an embodiment of a curve-fitted lane line according to the present application;
FIG. 4 is a schematic structural diagram of an embodiment of a lane line detecting apparatus according to the present application;
fig. 5 is a schematic structural diagram of an embodiment of an electronic device according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor systems and/or microcontroller systems.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The embodiment of the application provides a lane line detection method and a lane line detection device, which are respectively described below.
Fig. 1 is a flow chart of an embodiment of a lane line detection method according to the present application, where, as shown in fig. 1, the lane line detection method includes:
s101, acquiring an initial image, and performing low-computation-power detection on initial key points of the initial image to obtain feature dimension information of the initial key points; the characteristic dimension information comprises a preset number of detected lane lines and initial key points corresponding to each lane line;
s102, screening the initial key points according to the characteristic dimension information to obtain effective key points corresponding to each lane line;
s103, respectively processing the effective key points of each lane line according to a curve fitting method to obtain a target lane line corresponding to each lane line.
Compared with the prior art, the lane line detection method provided by the application acquires an initial image and performs low-computation-power detection on the initial key points of the initial image to obtain the feature dimension information of the initial key points, where the feature dimension information includes a preset number of detected lane lines and the initial key points corresponding to each lane line; screens the initial key points according to the feature dimension information to obtain the effective key points corresponding to each lane line; and processes the effective key points of each lane line by a curve fitting method to obtain the target lane line corresponding to each lane line. Because the initial image is processed with low-computation-power detection and no other complex operators are needed, the method reduces the computational complexity of lane line detection and increases the data processing speed; by processing the initial key points with a curve fitting method, it solves the technical problem of detecting lane lines with low computing power.
It should be noted that the application can be applied to a low-computing-power embedded platform. It adopts a keypoint-based lane line detection approach and uses lightweight components, such as a smaller backbone network and fewer multiply-accumulate operations, where the backbone can be an ordinary convolutional network. Only common operators supported by low-computing-power embedded platforms, such as convolution, are used; the inference speed is greatly improved by reducing computational complexity, and the lightweight network architecture is not prone to overfitting.
In the embodiments of the application, the initial image may be captured by a camera of the vehicle or obtained from another source; the specific acquisition process can be set according to the actual situation and is not limited here.
In some embodiments of the present application, step S101 includes:
performing low-computation-power detection of the initial key points of the initial image to obtain initial feature data of the initial key points;
performing reverse processing on the initial feature data through a deconvolution layer to obtain an initial feature image; and
performing convolution dimension reduction on the initial feature image to obtain the feature dimension information of the initial key points.
It should be noted that the low-computation-power detection may be performed by an ordinary convolutional network, whose error between the true and predicted values is continuously reduced through the backward propagation of deep learning during training. The initial image is input to the ordinary convolutional network, which converts it into initial feature data; the initial feature data is then input to a deconvolution layer, which, after further feature extraction, converts it through three convolutional layers into an initial feature image downsampled to 1/8 of the input. The specific output size can be set according to the actual situation.
In a specific embodiment of the application, the ordinary convolutional network may be a "ResNet-28" obtained by deleting the last six convolutional layers of "ResNet-34", which increases the data processing speed by reducing computational complexity, and the deconvolution layer may be a 4x4 deconvolution layer used to capture more feature dimension information.
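To make the output sizes concrete, the following minimal sketch computes the shapes of the 1/8-downsampled detection heads. The function name is illustrative, and the 256x512 input resolution is an assumption chosen so that the result matches the 32x64 grid used in the examples below; the per-head channel counts (one confidence map, two offset maps, a 4-channel instance-embedding image) follow the description.

```python
def head_shapes(img_h, img_w, stride=8):
    """Output shapes of the three detection heads at 1/8 downsampling.

    Channel counts follow the description: one confidence map, two
    offset maps (horizontal and vertical axis), and a 4-channel
    instance-embedding ("high-dimensional") feature image.
    """
    h, w = img_h // stride, img_w // stride
    return {"confidence": (1, h, w),
            "offset": (2, h, w),
            "embedding": (4, h, w)}

# An assumed 256x512 input yields the 32x64 feature grid of the examples.
shapes = head_shapes(256, 512)
```

With a 256x512 input this gives a 32x64 grid for every head, so all three loss functions operate at the same single scale.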
In some embodiments of the application, the feature dimension information comprises a confidence feature image;
the convolution dimension reduction processing is performed on the initial feature image to obtain feature dimension information of the initial key point, including:
establishing a pixel coordinate system according to the initial characteristic image;
establishing a grid on the pixel coordinate system according to the output dimension of the initial image;
according to the grids, initial coordinates corresponding to each initial key point on the pixel coordinate system are respectively determined;
determining the confidence coefficient corresponding to each initial key point according to the initial coordinates of each initial key point;
and outputting a confidence characteristic image comprising the confidence corresponding to each initial key point according to the initial characteristic image.
It should be noted that a pixel coordinate system may be established with the upper-left corner of the initial feature image as the origin, and a grid may then be established on the pixel coordinate system according to the output dimension of the initial feature image; for example, if the output dimension is 32x64, a 32x64 grid is established. The initial coordinates of each initial key point on the pixel coordinate system are determined from its position in the grid, and the confidence corresponding to each initial key point is then computed through a confidence (Confidence) loss function, so that a confidence feature image can be output. For example, if 50 initial key points are output, 50 corresponding confidences are obtained; on the confidence feature image, the value at each initial key point's position represents that key point's confidence.
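Reading initial key point coordinates out of such a confidence feature image can be sketched as follows (a minimal illustration; the function name is an assumption, and coordinates are returned as (column, row) pairs with the origin at the top-left of the feature image):

```python
import numpy as np

def initial_coords(conf_map, threshold):
    """Return grid coordinates (col, row) of cells whose confidence
    meets the threshold; origin is the top-left of the feature image."""
    rows, cols = np.nonzero(np.asarray(conf_map) >= threshold)
    return [(int(c), int(r)) for r, c in zip(rows, cols)]

# Tiny 2x2 confidence map: two cells pass an assumed 0.5 threshold.
coords = initial_coords([[0.9, 0.1], [0.2, 0.8]], 0.5)
```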
In some embodiments of the application, the feature dimension information comprises an offset feature image;
the convolution dimension reduction processing is performed on the initial feature image to obtain feature dimension information of the initial key point, including:
calculating the positions of the initial key points on the initial feature image according to an offset function to obtain a horizontal axis offset and a vertical axis offset corresponding to each initial key point;
and outputting a transverse axis offset characteristic image comprising the transverse axis offset corresponding to each initial key point and a longitudinal axis offset characteristic image comprising the longitudinal axis offset according to the initial characteristic image.
The initial feature image may be processed by an offset (Offset) loss function to obtain the horizontal axis offset and the vertical axis offset of each initial key point, so that the horizontal axis offset feature image and the vertical axis offset feature image can be output.
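Applying the two offset maps to recover sub-cell key point positions can be sketched as below. Parallel lists of offsets are used for brevity (an assumed simplification; in practice the offsets would be read from the two offset feature images at each key point's cell):

```python
def refine_keypoints(coords, dx, dy):
    """Add each key point's horizontal (dx) and vertical (dy) axis
    offset to its integer grid coordinates (x, y)."""
    return [(x + ox, y + oy) for (x, y), ox, oy in zip(coords, dx, dy)]

# A key point at grid cell (10, 5) with offsets 0.2 and 0.1
# moves to the accurate position (10.2, 5.1).
refined = refine_keypoints([(10, 5)], [0.2], [0.1])
```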
In some embodiments of the application, the feature dimension information comprises a high-dimensional feature image;
the convolution dimension reduction processing is performed on the initial feature image to obtain feature dimension information of the initial key point, including:
constructing a similarity matrix according to the number of the initial key points on the initial characteristic image;
according to the feature information between every two initial key points, calculating the similarity of each initial key point corresponding to other initial key points;
determining the same lane line from the initial key points with the similarity smaller than a similarity threshold value to obtain a preset number of lane lines;
and outputting a high-dimensional characteristic image comprising the preset number of lane lines according to the initial characteristic image.
It should be noted that the initial key points on the initial feature image may be processed through an instance-embedding feature loss function to obtain the similarity of each initial key point. The processing includes constructing a similarity matrix according to the number of initial key points on the initial feature image; for example, for 50 initial key points, a 50x50 similarity matrix is constructed. Then, according to the feature information between every two initial key points, the similarity of each initial key point to every other initial key point is calculated: for example, the similarity between key point 1 and key point 2, between key point 1 and key point 3, and so on up to key point 50; then between key point 2 and key point 3, up to key point 50; and so forth. Initial key points whose pairwise similarity measure is smaller than the similarity threshold are then assigned to the same lane line. For example, if the similarity measure of key point 1 to each of key points 2, 3, 4, and 5 is smaller than the similarity threshold, key points 1 through 5 belong to the same lane line; if, instead, the similarity measure between key point 4 and key point 5 is greater than or equal to the similarity threshold, key points 4 and 5 belong to different lane lines, so key points 1, 2, and 3 form one lane line together with key point 4 or with key point 5, but not both.
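The grouping rule above can be sketched as a greedy pass over the pairwise measures (a simplified illustration: the measure is treated as an embedding distance, so values below the threshold mean "same lane line", and the greedy clustering order is an assumption not fixed by the description):

```python
import numpy as np

def group_keypoints(embeddings, sim_threshold):
    """Assign a lane-line label to each key point: a point whose
    embedding distance to an already-labelled point is below the
    threshold joins that point's lane line."""
    n = len(embeddings)
    labels = [-1] * n
    next_label = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        labels[i] = next_label
        for j in range(i + 1, n):
            dist = np.linalg.norm(np.asarray(embeddings[i], dtype=float)
                                  - np.asarray(embeddings[j], dtype=float))
            if labels[j] == -1 and dist < sim_threshold:
                labels[j] = next_label
        next_label += 1
    return labels

# Two nearby embeddings share a lane line; the distant one starts another.
labels = group_keypoints([[0, 0], [0.1, 0], [5, 5]], 1.0)
```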
In some embodiments of the present application, step S102 includes:
according to the confidence characteristic image, determining initial key points with the confidence degree larger than or equal to a confidence degree threshold value as first key points, and obtaining initial coordinates corresponding to each first key point;
calculating according to the initial coordinates corresponding to each first key point, the transverse axis offset in the transverse axis offset characteristic image and the longitudinal axis offset in the longitudinal axis offset characteristic image, and determining the accurate position corresponding to each first key point;
and determining a second key point corresponding to each lane line in the preset number of lane lines according to the accurate position corresponding to each first key point on the high-dimensional characteristic image, and determining the second key point corresponding to each lane line as an effective key point corresponding to each lane line.
It should be noted that the Confidence loss function outputs one feature map, namely the confidence feature image; the Offset loss function outputs two feature images, namely a horizontal-axis offset feature image and a vertical-axis offset feature image; and the Instance embedding feature loss function outputs one high-dimensional feature image with 4 channels. Initial key points in the confidence feature image whose confidence is greater than or equal to the confidence threshold can be determined as first key points, and initial key points whose confidence is smaller than the confidence threshold are discarded. The accurate position corresponding to each first key point can be obtained by adding the horizontal-axis offset and the vertical-axis offset to the initial coordinates of the first key point; for example, if key point 1 is at (10, 5), the horizontal-axis offset is 0.2, and the vertical-axis offset is 0.1, the coordinates of the accurate position of key point 1 are (10.2, 5.1). The second key points corresponding to each of the preset number of lane lines can then be determined according to the accurate position of each first key point on the high-dimensional feature image; for example, lane line 1 comprises key point 1, key point 2, and key point 3, and if key point 4 or key point 5 is a discarded key point, the key points of lane line 1 are determined to comprise only key points 1, 2, and 3; the other lane lines are obtained similarly. As shown in fig. 2, the first key points can be determined from the initial key points of the initial feature image, and the accurate position of each first key point can be determined; then, according to the distribution of the accurate positions of the first key points, the first key points in area 1 form the second key points of lane line 1, and the first key points in area 2 form the second key points of lane line 2.
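The screening step described above — thresholding the confidence feature image and refining survivors with the two offset feature images — can be sketched as follows. This is a minimal illustration, assuming the three head outputs are NumPy arrays; the function and variable names are illustrative, not from the patent.

```python
import numpy as np

def filter_keypoints(conf_map, offset_x, offset_y, conf_thresh=0.5):
    """Keep initial key points whose confidence meets the threshold
    (first key points) and refine each survivor's grid coordinates
    with the predicted horizontal/vertical offsets (accurate positions)."""
    ys, xs = np.nonzero(conf_map >= conf_thresh)
    return [(float(x + offset_x[y, x]), float(y + offset_y[y, x]))
            for y, x in zip(ys, xs)]

# Toy example mirroring the text: key point 1 at (10, 5) with horizontal
# offset 0.2 and vertical offset 0.1 becomes (10.2, 5.1).
conf = np.zeros((16, 16)); conf[5, 10] = 0.9
off_x = np.zeros((16, 16)); off_x[5, 10] = 0.2
off_y = np.zeros((16, 16)); off_y[5, 10] = 0.1
print(filter_keypoints(conf, off_x, off_y))  # → [(10.2, 5.1)]
```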
In an embodiment of the present application, the convolution dimension reduction outputs the three feature maps corresponding to the Confidence loss function, the Offset loss function, and the Instance embedding feature loss function as 1/8-downsampled single-scale outputs, respectively.
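The channel layout of the three single-scale heads can be illustrated with array shapes. The input resolution below is an assumption for demonstration; the channel counts (1 for confidence, 2 for offsets, 4 for the instance embedding) follow the description above.

```python
import numpy as np

# Illustrative input resolution (an assumption, not from the patent).
H, W = 256, 512
h, w = H // 8, W // 8        # single-scale heads, downsampled by 1/8

confidence = np.zeros((1, h, w))   # 1 channel: key-point confidence
offsets    = np.zeros((2, h, w))   # 2 channels: x-axis and y-axis offsets
embedding  = np.zeros((4, h, w))   # 4 channels: instance embedding

print(confidence.shape, offsets.shape, embedding.shape)
# → (1, 32, 64) (2, 32, 64) (4, 32, 64)
```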
In some embodiments of the application, the curve fitting method comprises:
performing similarity calculation on the effective key points on the lane lines to obtain a key point set of the lane lines;
and performing curve fitting on the key point set of the lane line to obtain a target lane line of the lane line.
After the effective key points of each lane line are obtained, curve fitting can be performed on the effective key points corresponding to each lane line through a curve fitting method to obtain the target lane line corresponding to each lane line; the curve fitting process is the same for each lane line, and the lane lines do not affect each other.
In some embodiments of the present application, the performing similarity calculation on the valid keypoints on the lane line to obtain a set of keypoints on the lane line includes:
randomly selecting a preset number of third key points from the effective key points;
respectively calculating the distance between each third key point and other third key points to obtain the distance value of each third key point;
and determining the third key point with the distance value larger than or equal to a distance threshold value as a fourth key point, and obtaining a key point set of the lane line according to the fourth key point.
In the embodiment of the present application, suppose lane line 2 includes key points 20 to 40 and the preset number of third key points is 3, so that 3 key points, for example key points 20, 21, and 22, may be randomly selected. The distances from key point 20 to key points 21 and 22, from key point 21 to key points 20 and 22, and from key point 22 to key points 20 and 21 are calculated respectively, obtaining the distance value of each of the three key points. If the distance values of key points 21 and 22 are greater than or equal to the distance threshold, key points 21 and 22 are determined as fourth key points and added to the key point set of lane line 2. Then 3 key points are randomly selected again, and the process repeats until all of key points 20 to 40 of lane line 2 have been processed, yielding the key point set of lane line 2.
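The random-draw distance screening just described can be sketched as below. This is a hedged illustration, not the patent's implementation: the function name is invented, and it follows the text's rule of keeping a candidate whose summed distance to the other drawn candidates meets the threshold.

```python
import math
import random

def distance_filter(points, dist_thresh, batch=3):
    """Randomly draw `batch` third key points at a time; a candidate whose
    summed distance to the other candidates in its draw is >= dist_thresh
    becomes a fourth key point and joins the lane line's key point set."""
    remaining = list(points)
    random.shuffle(remaining)           # random selection of each draw
    key_point_set = []
    while remaining:
        group, remaining = remaining[:batch], remaining[batch:]
        for p in group:
            dist_value = sum(math.dist(p, q) for q in group if q is not p)
            if dist_value >= dist_thresh:
                key_point_set.append(p)
    return key_point_set
```

With three points where one is far from the other two, only the distant point's summed distance clears a high threshold, so only it survives the draw.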
In some embodiments of the present application, the performing curve fitting on the set of key points of the lane line to obtain a target lane line of the lane line includes:
performing cubic curve fitting on the fourth key points of the lane line according to the key point set to obtain continuous values corresponding to each fourth key point;
and replacing each fourth key point with the corresponding continuous value to obtain a target lane line of the lane line.
It should be noted that, compared with the straight line obtained by first-order curve fitting or the curve obtained by second-order curve fitting, the curve obtained by cubic (third-order) curve fitting is more consistent with the shape of a lane line. A continuous value corresponding to each fourth key point can therefore be obtained, and the continuous values are then used to replace the discrete fourth key points to obtain a smooth lane line; as shown in fig. 3, lane line 1 and lane line 2 can be obtained. The specific process of cubic curve fitting may be set according to practical situations, and embodiments of the present application are not limited herein.
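One possible cubic fit, sketched with NumPy's least-squares polynomial routines. Since the patent leaves the fitting process open, this fits x as a function of y (an assumption, reasonable because lane lines run roughly vertically in image space) and samples continuous values along the curve.

```python
import numpy as np

def fit_lane(keypoints, samples=50):
    """Cubic-fit the fourth key points (x as a function of y) and return
    continuous values sampled along the fitted curve as the target lane line."""
    pts = np.asarray(keypoints, dtype=float)
    xs, ys = pts[:, 0], pts[:, 1]
    coeffs = np.polyfit(ys, xs, deg=3)            # x = a*y^3 + b*y^2 + c*y + d
    y_cont = np.linspace(ys.min(), ys.max(), samples)
    x_cont = np.polyval(coeffs, y_cont)
    return np.stack([x_cont, y_cont], axis=1)     # smooth, continuous lane line

# Noisy key points lying roughly on x = 0.01*y^2 + 2
ys = np.arange(20.0)
xs = 0.01 * ys**2 + 2.0 + np.random.default_rng(0).normal(0, 0.05, ys.size)
lane = fit_lane(np.stack([xs, ys], axis=1))
```

The returned array replaces the discrete fourth key points with 50 continuous samples, which is what makes the rendered lane line smooth.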
According to the method, low-computation-power detection is performed on the initial image, which reduces the computational complexity of lane line detection and improves the data processing speed; the initial key points are then processed through the curve fitting method, thereby solving the technical problem of lane line detection under low computing power.
In order to better implement the lane line detection method in the embodiment of the present application, correspondingly, the embodiment of the present application further provides a lane line detection device, as shown in fig. 4, on the basis of the lane line detection method, where the lane line detection device includes:
the key point detection module 401 is configured to obtain an initial image, and perform low-computation-force detection on an initial key point of the initial image to obtain feature dimension information of the initial key point; the characteristic dimension information comprises a preset number of detected lane lines and initial key points corresponding to each lane line;
the key point screening module 402 is configured to screen the initial key points according to the feature dimension information, so as to obtain effective key points corresponding to each lane line;
the lane line fitting module 403 is configured to process the valid key points of each lane line according to a curve fitting method, so as to obtain a target lane line corresponding to each lane line.
The lane line detection device provided in the above embodiment may implement the technical solution described in the above lane line detection method embodiment, and the specific implementation principle of each module or unit may refer to the corresponding content in the above lane line detection method embodiment, which is not described herein again.
As shown in fig. 5, the present application further provides an electronic device 500 accordingly. The electronic device 500 comprises a processor 501, a memory 502 and a display 503. Fig. 5 shows only some of the components of the electronic device 500, but it should be understood that not all of the illustrated components are required to be implemented and that more or fewer components may be implemented instead.
The memory 502 may be an internal storage unit of the electronic device 500 in some embodiments, such as a hard disk or memory of the electronic device 500. The memory 502 may also be an external storage device of the electronic device 500 in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the electronic device 500.
Further, the memory 502 may also include both internal storage units and external storage devices of the electronic device 500. The memory 502 is used for storing application software and various types of data for installing the electronic device 500.
The processor 501 may be a central processing unit (Central Processing Unit, CPU), microprocessor or other data processing chip in some embodiments for executing program code or processing data stored in the memory 502, such as the lane line detection method of the present application.
The display 503 may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch display, or the like in some embodiments. The display 503 is used to display information on the electronic device 500 and to present a visual user interface. The components 501-503 of the electronic device 500 communicate with each other via a system bus.
In some embodiments of the present application, when the processor 501 executes the lane line detection program in the memory 502, the following steps may be implemented:
acquiring an initial image, and performing low-computation-power detection on initial key points of the initial image to obtain feature dimension information of the initial key points; the characteristic dimension information comprises a preset number of detected lane lines and initial key points corresponding to each lane line;
screening the initial key points according to the characteristic dimension information to obtain effective key points corresponding to each lane line;
and respectively processing the effective key points of each lane line according to a curve fitting method to obtain a target lane line corresponding to each lane line.
It should be understood that: the processor 501 may perform other functions in addition to the above functions when executing the lane line detection program in the memory 502, and in particular, reference may be made to the description of the corresponding method embodiments above.
Further, the type of the electronic device 500 is not particularly limited; the electronic device 500 may be a portable electronic device such as a mobile phone, a tablet computer, a personal digital assistant (PDA), a wearable device, or a laptop. Exemplary embodiments of portable electronic devices include, but are not limited to, portable electronic devices running iOS, Android, Microsoft, or other operating systems. The portable electronic device may also be another portable electronic device with a touch-sensitive surface (e.g., a touch panel), such as a laptop computer. It should also be appreciated that in other embodiments of the application, the electronic device 500 may not be a portable electronic device but a desktop computer with a touch-sensitive surface (e.g., a touch panel).
Correspondingly, the embodiment of the application also provides a computer readable storage medium, and the computer readable storage medium is used for storing a computer readable program or instruction, and when the program or instruction is executed by a processor, the steps or functions of the lane line detection method provided by the above method embodiments can be realized.
Those skilled in the art will appreciate that all or part of the flow of the methods of the embodiments described above may be accomplished by a computer program, stored in a computer readable storage medium, that instructs related hardware (e.g., a processor, a controller, etc.). The computer readable storage medium may be a magnetic disk, an optical disk, a read-only memory, or a random access memory.
The lane line detection method and device provided by the present application are described in detail above. Specific examples are used herein to illustrate the principles and embodiments of the present application, and the above description of the embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope according to the idea of the present application; therefore, the content of this description should not be construed as limiting the present application.

Claims (10)

1. A lane line detection method, characterized by comprising:
acquiring an initial image, and performing low-computation-power detection on initial key points of the initial image to obtain feature dimension information of the initial key points; the characteristic dimension information comprises a preset number of detected lane lines and initial key points corresponding to each lane line;
screening the initial key points according to the characteristic dimension information to obtain effective key points corresponding to each lane line;
and respectively processing the effective key points of each lane line according to a curve fitting method to obtain a target lane line corresponding to each lane line.
2. The lane line detection method according to claim 1, wherein the performing low-computation-power detection on the initial key point of the initial image to obtain feature dimension information of the initial key point includes:
performing low-computation-power detection on the initial key points of the initial image to obtain initial characteristic data of the initial key points;
performing reverse processing on the initial characteristic data according to the deconvolution layer to obtain an initial characteristic image;
and carrying out convolution dimension reduction processing on the initial feature image to obtain feature dimension information of the initial key point.
3. The lane line detection method according to claim 1, wherein the curve fitting method comprises:
performing similarity calculation on the effective key points on the lane lines to obtain a key point set of the lane lines;
and performing curve fitting on the key point set of the lane line to obtain a target lane line of the lane line.
4. The lane-line detection method according to claim 2, wherein the feature dimension information includes a confidence feature image;
the convolution dimension reduction processing is performed on the initial feature image to obtain feature dimension information of the initial key point, including:
establishing a pixel coordinate system according to the initial characteristic image;
establishing a grid on the pixel coordinate system according to the output dimension of the initial image;
according to the grids, initial coordinates corresponding to each initial key point on the pixel coordinate system are respectively determined;
determining the confidence coefficient corresponding to each initial key point according to the initial coordinates of each initial key point;
and outputting a confidence characteristic image comprising the confidence corresponding to each initial key point according to the initial characteristic image.
5. The lane line detection method according to claim 4, wherein the feature dimension information includes an offset feature image;
the convolution dimension reduction processing is performed on the initial feature image to obtain feature dimension information of the initial key point, including:
calculating the positions of the initial key points on the initial feature image according to an offset function to obtain a horizontal axis offset and a vertical axis offset corresponding to each initial key point;
and outputting a transverse axis offset characteristic image comprising the transverse axis offset corresponding to each initial key point and a longitudinal axis offset characteristic image comprising the longitudinal axis offset according to the initial characteristic image.
6. The lane line detection method according to claim 5, wherein the feature dimension information includes a high-dimensional feature image;
the convolution dimension reduction processing is performed on the initial feature image to obtain feature dimension information of the initial key point, including:
constructing a similarity matrix according to the number of the initial key points on the initial characteristic image;
according to the feature information between every two initial key points, calculating the similarity of each initial key point corresponding to other initial key points;
determining the same lane line from the initial key points with the similarity smaller than a similarity threshold value to obtain a preset number of lane lines;
and outputting a high-dimensional characteristic image comprising the preset number of lane lines according to the initial characteristic image.
7. The lane line detection method according to claim 6, wherein the screening the initial key points according to the feature dimension information to obtain the valid key points corresponding to each lane line includes:
according to the confidence characteristic image, determining initial key points with the confidence degree larger than or equal to a confidence degree threshold value as first key points, and obtaining initial coordinates corresponding to each first key point;
calculating according to the initial coordinates corresponding to each first key point, the transverse axis offset in the transverse axis offset characteristic image and the longitudinal axis offset in the longitudinal axis offset characteristic image, and determining the accurate position corresponding to each first key point;
and determining a second key point corresponding to each lane line in the preset number of lane lines according to the accurate position corresponding to each first key point on the high-dimensional characteristic image, and determining the second key point corresponding to each lane line as an effective key point corresponding to each lane line.
8. The lane line detection method according to claim 3, wherein the performing similarity calculation on the valid key points on the lane line to obtain the key point set of the lane line includes:
randomly selecting a preset number of third key points from the effective key points;
respectively calculating the distance between each third key point and other third key points to obtain the distance value of each third key point;
and determining the third key point with the distance value larger than or equal to a distance threshold value as a fourth key point, and obtaining a key point set of the lane line according to the fourth key point.
9. The lane line detection method according to claim 8, wherein the curve fitting the set of key points of the lane line to obtain a target lane line of the lane line includes:
performing cubic curve fitting on the fourth key points of the lane line according to the key point set to obtain continuous values corresponding to each fourth key point;
and replacing each fourth key point with the corresponding continuous value to obtain a target lane line of the lane line.
10. A lane line detection apparatus, comprising:
the key point detection module is used for acquiring an initial image, and detecting initial key points of the initial image with low calculation force to obtain characteristic dimension information of the initial key points; the characteristic dimension information comprises a preset number of detected lane lines and initial key points corresponding to each lane line;
the key point screening module is used for screening the initial key points according to the characteristic dimension information to obtain effective key points corresponding to each lane line;
the lane line fitting module is used for respectively processing the effective key points of each lane line according to a curve fitting method to obtain a target lane line corresponding to each lane line.
CN202310726339.9A 2023-06-16 2023-06-16 Lane line detection method and device Pending CN116912790A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310726339.9A CN116912790A (en) 2023-06-16 2023-06-16 Lane line detection method and device


Publications (1)

Publication Number Publication Date
CN116912790A true CN116912790A (en) 2023-10-20

Family

ID=88359158


Country Status (1)

Country Link
CN (1) CN116912790A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination