CN112966624A - Lane line detection method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN112966624A
Authority
CN (China)
Prior art keywords
feature, manual, image, lane, detected
Legal status
Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number
CN202110281549.2A
Other languages
Chinese (zh)
Inventors
陈建松, 王晓东, 张天雷
Current and original assignee (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Beijing Zhuxian Technology Co Ltd
Application filed by Beijing Zhuxian Technology Co Ltd
Priority to CN202110281549.2A
Publication of CN112966624A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components


Abstract

The invention discloses a lane line detection method and device, electronic equipment and a storage medium. The lane line detection method comprises the following steps: acquiring network features of an image to be detected extracted by a neural network under a specified sampling multiple, and manual features extracted by feature extraction operators; fusing the network features and manual features acquired under the specified sampling multiple to acquire a fusion feature; and acquiring the lane lines contained in the image to be detected based on the fusion feature. Network features containing the core information of the image to be detected are extracted by the neural network with a small amount of computation and supplemented by the manual features extracted by the feature extraction operators, and the lane lines are detected from the resulting fusion feature, so that the computation load is reduced while the accuracy of the detection result is guaranteed.

Description

Lane line detection method and device, electronic equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of data processing, in particular to a lane line detection method and device, electronic equipment and a storage medium.
Background
Lane lines are an important basis for the local lateral positioning of a vehicle on the road and play an important role in the positioning of intelligent driving vehicles, especially when global positioning information is unstable or even lost. A stable and efficient lane line detection algorithm is therefore essential for an intelligent driving vehicle or an advanced driver-assistance system. Currently adopted lane line detection algorithms fall mainly into two classes: detection algorithms based on traditional manual feature extraction, and detection algorithms based on deep learning.
However, a detection algorithm that relies on manual feature extraction alone uses manually designed feature extractors and generation rules that are simple and often cannot adapt to scenes with complex environmental changes, so its detection accuracy is low; a deep learning algorithm used alone, with its large number of parameters and complex computation, is slow. Existing lane line detection approaches therefore cannot meet users' requirements.
Disclosure of Invention
The embodiment of the invention provides a lane line detection method and device, electronic equipment and a storage medium, so as to realize lane line detection.
In a first aspect, an embodiment of the present invention provides a lane line detection method, including:
acquiring network characteristics extracted by a neural network of an image to be detected under a specified sampling multiple and manual characteristics extracted by a characteristic extraction operator;
fusing the network characteristics and manual characteristics acquired under the specified sampling multiple to acquire fused characteristics;
and acquiring the lane lines contained in the image to be detected based on the fusion characteristics.
In a second aspect, an embodiment of the present invention provides a lane line detection apparatus, including:
the network characteristic and manual characteristic acquisition module is used for acquiring network characteristics extracted by the image to be detected through a neural network under a specified sampling multiple and manual characteristics extracted by a characteristic extraction operator;
the fusion characteristic acquisition module is used for fusing the network characteristics and the manual characteristics acquired under the specified sampling multiple to acquire fusion characteristics;
and the lane line acquisition module is used for acquiring the lane lines contained in the image to be detected based on the fusion characteristics.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the methods of any of the embodiments of the present invention.
In a fourth aspect, the embodiments of the present invention also provide a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the method of any of the embodiments of the present invention.
In the embodiment of the invention, network features containing the core information of the image to be detected are extracted by a neural network with a small amount of computation and supplemented by the manual features extracted by the feature extraction operators, and the lane lines are detected from the resulting fusion feature, so that the computation load is reduced while the accuracy of the detection result is guaranteed.
Drawings
Fig. 1A is a flowchart of a lane line detection method according to an embodiment of the present invention;
fig. 1B is a schematic view of an application scenario of lane line detection according to an embodiment of the present invention;
FIG. 1C is a schematic diagram of a horizontal edge feature operator according to an embodiment of the present invention;
FIG. 1D is a schematic diagram of a horizontal feature smoothing operator according to an embodiment of the present invention;
FIG. 1E is a schematic diagram of a feature truncation operator according to an embodiment of the present invention;
FIG. 1F is a diagram of a vertical edge feature operator curve according to an embodiment of the present invention;
FIG. 1G is a schematic diagram of a curve of a vertical feature smoothing operator according to an embodiment of the present invention;
FIG. 1H is a schematic diagram of a horizontal filtering kernel size according to an embodiment of the present invention;
FIG. 1I is a schematic diagram of a vertical filtering kernel size according to an embodiment of the present invention;
fig. 2 is a flowchart of a lane line detection method according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of a lane line detection device according to a third embodiment of the present invention;
fig. 4 is a block diagram of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1A is a flowchart of a lane line detection method provided in an embodiment of the present invention, where the present embodiment is applicable to a case of detecting a lane line, and the method may be executed by a lane line detection apparatus in an embodiment of the present invention, where the apparatus may be implemented in a software and/or hardware manner, and the method in an embodiment of the present invention specifically includes the following steps:
step S101, acquiring network characteristics of an image to be detected, which are extracted through a neural network under a specified sampling multiple, and manual characteristics extracted through a characteristic extraction operator.
As shown in fig. 1B, an application scenario diagram of lane line detection provided in this embodiment, the neural network used for network feature extraction may specifically be a deep convolutional neural network. It comprises four filter layers, whose corresponding sampling multiples increase in sequence: 4× downsampling, 8× downsampling, 16× downsampling, and 32× downsampling; it also comprises fully connected layers. For example, the deep convolutional neural network of this embodiment may specifically adopt ShuffleNetV2, whose network parameters are shown in Table 1 below:
TABLE 1
(Table 1 is rendered as an image in the original document; it lists the layer configuration of the network, including the output feature channel counts referenced below.)
Optionally, obtaining network features extracted by the neural network of the image to be detected under the specified sampling multiple, and manual features extracted by the feature extraction operator may include: acquiring network characteristics of an image to be detected, which are extracted through a neural network under a specified sampling multiple; acquiring lane horizontal manual features of an image to be detected, which are extracted by a feature extraction operator under a specified sampling multiple; and acquiring the vertical manual features of the lane extracted by the feature extraction operator of the image to be detected under the specified sampling multiple.
Specifically, the specified sampling multiple in this embodiment may be the downsampling multiple of any filter layer contained in the deep convolutional neural network. Each filter layer consists of a convolutional layer, batch normalization, nonlinear activation, and pooling, so features at different scales can be acquired, and each filter layer need contain only a small number of filter kernels, so the deep convolutional neural network can extract the network features containing the key information of the image to be detected with a small amount of computation. Although extracting network features through the neural network greatly reduces the computation load, the extracted network features contain only the key information and may not be comprehensive enough for accurate lane line detection. Manual feature extraction is therefore performed on the image to be detected under the same sampling multiple to acquire manual features that supplement the network features; the extracted manual features specifically comprise lane horizontal manual features and lane vertical manual features.
Optionally, acquiring lane horizontal manual features of the image to be detected, which are extracted through the feature extraction operator under the specified sampling multiple, may include: acquiring initial horizontal features of an image to be detected, which are extracted by a horizontal edge feature operator under a specified sampling multiple; smoothing the initial horizontal feature through a feature smoothing operator to obtain a horizontal smooth feature; and intercepting the horizontal smooth feature through a feature truncation operator to obtain the horizontal manual feature of the lane.
The lane lines in the image to be detected are mostly bright stripes with colors such as white or yellow, the ground is usually a dark texture, and the edge regions of the lane lines and the ground often have obvious and stable edge features, so that the feature extraction operators can be used for extracting the manual features. Since the manual features include lane-horizontal manual features and lane-vertical manual features, the feature extraction operators used include: a horizontal edge feature operator, a horizontal feature smoothing operator, a vertical edge feature operator, a vertical feature smoothing operator, and a feature truncation operator. Therefore, when lane horizontal manual features of an image to be detected under a specified sampling multiple are extracted, the initial horizontal features can be extracted by using a horizontal edge feature operator in the following formula (1), and as shown in fig. 1C, the curve diagram of the horizontal edge feature operator is shown:
(Equation (1) is rendered as an image in the original document.)
In it, F_in(x, y) denotes the pixel value of the pixel at coordinate (x, y) in the image to be detected, i is the step length, w is the horizontal window size, and the result is the initial horizontal feature corresponding to the pixel at coordinate (x, y).
For the initial horizontal feature corresponding to each pixel point in the image to be detected, smoothing is performed by using a horizontal feature smoothing operator in the following formula (2) to obtain a horizontal smoothing feature, and as shown in fig. 1D, a curve diagram of the horizontal feature smoothing operator is shown:
(Equation (2) is rendered as an image in the original document.)
In it, the input is the initial horizontal feature corresponding to the pixel at coordinate (x, y), i is the step length, w is the horizontal window size, and the result is the horizontal smooth feature corresponding to the pixel at coordinate (x, y).
For the horizontal smooth feature corresponding to each pixel in the image to be detected, the feature truncation operator in the following equation (3) is adopted to truncate the maximum feature value and acquire the lane horizontal manual feature; fig. 1E shows the curve schematic diagram of the feature truncation operator:
(Equation (3) is rendered as an image in the original document.)
In it, thresh is a preset threshold and F_horizontal(x, y) is the lane horizontal manual feature corresponding to the pixel at coordinate (x, y).
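The three-step pipeline above (edge response, smoothing, truncation) can be sketched as follows. Since equations (1) to (3) appear only as images in the source, the exact operator forms are assumptions: here the edge operator is modeled as a windowed horizontal difference with step length i and window size w, the smoothing as a horizontal box filter, and the truncation as clipping at thresh. All function names are illustrative.

```python
import numpy as np

def horizontal_edge(img, w=4, i=1):
    """Assumed form of the horizontal edge operator (eq. 1): absolute
    difference between each pixel and its neighbor w*i columns away."""
    shifted = np.roll(img.astype(np.float32), -w * i, axis=1)
    return np.abs(img.astype(np.float32) - shifted)

def horizontal_smooth(feat, w=4):
    """Assumed horizontal box smoothing (eq. 2) over a window of size w."""
    kernel = np.ones(w, dtype=np.float32) / w
    return np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, feat)

def truncate(feat, thresh=50.0):
    """Feature truncation (eq. 3): cap responses at a preset threshold."""
    return np.minimum(feat, thresh)

def lane_horizontal_feature(img, w=4, i=1, thresh=50.0):
    """Full lane horizontal manual feature: edge -> smooth -> truncate."""
    return truncate(horizontal_smooth(horizontal_edge(img, w, i), w), thresh)
```

The lane vertical manual feature of equations (4) to (6) would follow the same pattern with the roles of rows and columns swapped and window size h in place of w.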
Optionally, the obtaining of the lane vertical manual feature extracted by the feature extraction operator under the specified sampling multiple of the image to be detected may include: acquiring initial vertical features of an image to be detected, which are extracted through a vertical edge feature operator under a specified sampling multiple; smoothing the initial vertical features through a feature smoothing operator to obtain vertical smooth features; and intercepting the vertical smooth feature through a feature truncation operator to obtain the vertical manual feature of the lane.
Specifically, the method for acquiring the vertical manual feature of the lane of the image to be detected in the present embodiment is substantially the same as the method for acquiring the horizontal manual feature of the lane. When lane vertical manual features of an image to be detected under a specified sampling multiple are extracted, the vertical edge feature operator in the following formula (4) can be specifically adopted to extract initial vertical features, and as shown in fig. 1F, the vertical edge feature operator is a curve diagram:
(Equation (4) is rendered as an image in the original document.)
In it, F_in(x, y) denotes the pixel value of the pixel at coordinate (x, y) in the image to be detected, i is the step length, h is the vertical window size, and the result is the initial vertical feature corresponding to the pixel at coordinate (x, y).
For the initial vertical feature corresponding to each pixel point in the image to be detected, smoothing is performed by using a vertical feature smoothing operator in the following formula (5) to obtain a vertical smooth feature, and as shown in fig. 1G, a curve diagram of the vertical feature smoothing operator is shown:
(Equation (5) is rendered as an image in the original document.)
In it, the input is the initial vertical feature corresponding to the pixel at coordinate (x, y), i is the step length, h is the vertical window size, and the result is the vertical smooth feature corresponding to the pixel at coordinate (x, y).
For the vertical smooth feature corresponding to each pixel in the image to be detected, the feature truncation operator in the following equation (6) is adopted to truncate the maximum feature value and acquire the lane vertical manual feature; fig. 1E shows the curve schematic diagram of the feature truncation operator:
(Equation (6) is rendered as an image in the original document.)
In it, thresh is a preset threshold and F_vertical(x, y) is the lane vertical manual feature corresponding to the pixel at coordinate (x, y). In this embodiment, the feature truncation operators used to determine the lane horizontal manual feature and the lane vertical manual feature of each pixel have the same curve; in practical applications their curves may differ, in which case the preset thresholds in equation (3) and equation (6) are different.
When the manual features are extracted through the feature extraction operators, the horizontal window size w and the vertical window size h represent the sizes of the filter kernel in the horizontal and vertical directions, respectively. Fig. 1H is a schematic diagram of the horizontal filter kernel size and fig. 1I is a schematic diagram of the vertical filter kernel size; both diagrams draw the width of an inner lane line in detail, and the inner lane lines are taken as the example to explain the selection of the filter kernel sizes in the two directions. H0 is the intersection point of the two inner lane lines, and the vertical coordinate axis runs from H0 to Hmax, with H0 as the origin of the coordinate system. The values of w and h for a lane line at any height are determined by linear interpolation, where w_min ≤ w ≤ w_max and h_min ≤ h ≤ h_max.
For example, when the size of the image to be detected is 640 × 480 and the neural network extracts features at 4× downsampling, the acquired network feature is FM_net, whose number of output feature channels is 46. During manual feature extraction, the lane horizontal manual feature F_horizontal(x, y) and the lane vertical manual feature corresponding to each pixel are known, so the lane horizontal manual feature f_horizontal and the lane vertical manual feature f_vertical of the whole image to be detected can be acquired. At 4× downsampling, the extracted f_horizontal and f_vertical are each multidimensional vectors of size 160 × 120, and together they directly form the manual feature FM_hand, whose number of output feature channels is 2. Of course, this embodiment takes 4× downsampling as the example for both network and manual feature extraction; extraction at other specified sampling multiples is substantially the same. For instance, the size corresponding to f_horizontal and f_vertical is 80 × 60 at 8× downsampling and 40 × 30 at 16× downsampling.
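The sizes quoted above follow directly from the stated sampling multiples; a quick arithmetic check (the channel counts 46 and 2 are taken from the description, the helper name is illustrative):

```python
def feature_map_size(width, height, factor):
    """Spatial size of a feature map after downsampling by the given multiple."""
    return width // factor, height // factor

# Sizes stated in the description for a 640 x 480 image to be detected.
sizes = {f: feature_map_size(640, 480, f) for f in (4, 8, 16, 32)}

# Channel count of the splicing feature used later: 46 network + 2 manual channels.
concat_channels = 46 + 2
```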
It should be noted that the deep convolutional neural network in this embodiment changes the scale of the image to be detected through the convolution and pooling contained in each filter layer to acquire network features at the specified sampling multiple, whereas manual feature extraction first downsamples the image to the specified sampling multiple and then extracts features; the two approaches therefore reach the same scale through different principles.
And step S102, fusing the network characteristics and the manual characteristics acquired under the specified sampling multiple to acquire fused characteristics.
Optionally, fusing the network features and the manual features acquired under the specified sampling multiple to acquire a fused feature, which may include: carrying out dimension splicing on the network characteristics, the lane horizontal manual characteristics and the lane vertical manual characteristics which are obtained under the specified sampling multiple according to channels to obtain splicing characteristics; determining a weight under each channel according to an attention mechanism; and multiplying the weight under each channel with the splicing characteristics to obtain fusion characteristics.
Specifically, in the embodiment, the network features and the manual features acquired under the specified sampling multiple are fused to acquire the fusion features, so that the fusion features include both the network features of the key information in the image to be detected and the manual features of the supplementary information in the image to be detected. When the deep convolutional neural network comprises a plurality of filter layers, the fusion features obtained by fusing the previous filter layer can be used as the input of the next adjacent filter layer, and the extraction of the network features and the manual features can be performed on the specified sampling multiples corresponding to the next filter layer.
And S103, acquiring the lane lines contained in the image to be detected based on the fusion characteristics.
Optionally, before acquiring the lane line included in the image to be detected based on the fusion feature, the method may further include: inputting the fusion characteristics into the last filter layer in the neural network, wherein the sampling multiple corresponding to the last filter layer is greater than the designated sampling multiple; and performing feature extraction on the fusion features through a filter layer to obtain detection features.
Optionally, obtaining the lane line included in the image to be detected based on the fusion feature may include: processing the detection characteristics through a full connection layer in the neural network to obtain curve parameters corresponding to each lane line; and determining the lane lines contained in the image to be detected according to the curve parameters corresponding to each lane line.
Specifically, in this embodiment, before acquiring the lane lines contained in the image to be detected based on the fusion feature, the finally acquired fusion feature needs to be input into the last filter layer of the deep convolutional neural network. For example, as shown in fig. 1B, the fusion feature acquired at 4× downsampling of the image to be detected is input into the filter layer corresponding to 32× downsampling, and feature extraction is performed on it through the last filter layer and the fully connected layers to acquire the detection feature. The ShuffleNetV2 deep neural network used in this embodiment includes two fully connected layers; as can be seen from Table 1, the number of output feature channels corresponding to the first fully connected layer is 128 and that of the second is 16. Since a real road usually contains four lane lines, and each lane line satisfies a cubic function y = a0 + a1·x + a2·x^2 + a3·x^3, the one-dimensional vector (a0, a1, …, a15) output by the second fully connected layer corresponds to the curve parameters of 4 lane lines × 4 coefficients each: the first four components give a0 to a3 of the first lane line, and so on in sequence for the remaining lane lines. The lane lines contained in the image to be detected are then determined from the curve parameters corresponding to each lane line.
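The mapping from the 16-dimensional output vector to four cubic lane curves can be sketched as below. The grouping (four consecutive coefficients per lane, lowest order first) follows the description; the function and variable names are illustrative.

```python
import numpy as np

def decode_lanes(params, n_lanes=4, degree=3):
    """Split the network's flat output (a0..a15) into per-lane cubic
    coefficients and return one callable y(x) per lane:
    y = a0 + a1*x + a2*x^2 + a3*x^3."""
    coeffs = np.asarray(params, dtype=np.float64).reshape(n_lanes, degree + 1)
    return [lambda x, c=c: sum(c[k] * x**k for k in range(degree + 1))
            for c in coeffs]
```

Each returned callable can then be sampled over image rows to draw the detected lane line.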
In the embodiment of the invention, network features containing the core information of the image to be detected are extracted by a neural network with a small amount of computation and supplemented by the manual features extracted by the feature extraction operators, and the lane lines are detected from the resulting fusion feature, so that the computation load is reduced while the accuracy of the detection result is guaranteed.
Example two
Fig. 2 is a flowchart of a lane line detection method according to the second embodiment of the present invention. On the basis of the above embodiment, this embodiment specifically describes fusing the network features and manual features acquired under the specified sampling multiple to acquire the fusion feature.
As shown in fig. 2, the method of the embodiment of the present disclosure specifically includes:
step S201, acquiring network characteristics extracted by a neural network of an image to be detected under a specified sampling multiple and manual characteristics extracted by a characteristic extraction operator, wherein the manual characteristics comprise lane horizontal manual characteristics and lane vertical manual characteristics.
And S202, carrying out dimension splicing on the network characteristics, the lane horizontal manual characteristics and the lane vertical manual characteristics which are acquired under the specified sampling multiple according to channels to acquire splicing characteristics.
Specifically, the network feature acquired under the specified sampling multiple in this embodiment is FM_net, and the manual feature FM_hand comprises the lane horizontal manual feature f_horizontal and the lane vertical manual feature f_vertical. Feature splicing is performed using equation (7), which is rendered as an image in the original document; in it, F is the splicing feature, FM_net is the network feature, FM_hand is the manual feature, and the remaining symbol denotes a scale-matching operation.
For example, the network feature FM_net acquired at 4× downsampling is a multidimensional matrix with 46 output feature channels, and the manual feature FM_hand acquired at 4× downsampling is a multidimensional matrix with 2 output channels; combining these with equation (7), the acquired splicing feature can be determined to be a multidimensional matrix with 48 output channels.
In step S203, the weight under each channel is determined according to the attention mechanism.
Specifically, after the number of output channels of the splicing feature is determined, the attention mechanism may be used to determine the weight of each channel, and the working principle of the attention mechanism is not the key point of the present application, so that no further description is given in this embodiment.
And S204, multiplying the weight under each channel with the splicing characteristics to obtain fusion characteristics.
After the weight under each channel is acquired, it is multiplied by the splicing feature using equation (8), which is rendered as an image in the original document, to acquire the fusion feature FM_fused.
After the fusion feature FM_fused corresponding to 4× downsampling is acquired, FM_fused can serve as the input of the next adjacent filter layer. This embodiment describes only the acquisition of the fusion feature at 4× downsampling; the manner of acquiring fusion features at other sampling multiples is substantially the same and is not repeated here.
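A minimal sketch of the fusion in equations (7) and (8) follows. Only the concatenate-then-reweight structure is taken from the text; since the equations and the attention mechanism are not spelled out in the source, the squeeze-and-excitation-style gating (global average pooling, two placeholder weight matrices, sigmoid) is an assumption, and all names are illustrative.

```python
import numpy as np

def fuse(fm_net, fm_hand, w1=None, w2=None):
    """Concatenate network and manual features along the channel axis (eq. 7),
    compute one weight per channel via an assumed attention stand-in
    (global average pool -> linear -> ReLU -> linear -> sigmoid),
    and reweight the splicing feature channel-wise (eq. 8)."""
    f = np.concatenate([fm_net, fm_hand], axis=0)   # (C, H, W) splicing feature
    c = f.shape[0]
    squeeze = f.mean(axis=(1, 2))                   # global average pool per channel
    if w1 is None:
        w1 = np.eye(c)                              # placeholders for learned weights
    if w2 is None:
        w2 = np.eye(c)
    hidden = np.maximum(w1 @ squeeze, 0.0)          # ReLU
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))  # sigmoid, one weight per channel
    return f * weights[:, None, None]               # eq. 8: channel-wise reweighting
```

With FM_net of 46 channels and FM_hand of 2 channels at 160 × 120, the fused output is a 48 × 160 × 120 feature map, matching the channel count derived from equation (7).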
S205, acquiring the lane lines contained in the image to be detected based on the fusion feature.

In the embodiment of the present invention, the network features containing the core information of the image to be detected are extracted by a neural network at a small computational cost and supplemented with the manual features extracted by the feature extraction operators, and the lane lines are detected from the resulting fusion feature, so that the amount of calculation is reduced while the accuracy of the detection result is ensured. In addition, multiplying the weight of each channel with the splicing feature obtained under the specified sampling multiple makes the resulting fusion feature more accurate.
Example three
Fig. 3 is a schematic structural diagram of a lane line detection device provided in an embodiment of the present invention, which specifically includes: a network feature and manual feature obtaining module 310, a fusion feature obtaining module 320, and a lane line obtaining module 330.

The network feature and manual feature obtaining module 310 is configured to obtain network features extracted from the image to be detected by a neural network under a specified sampling multiple, and manual features extracted by a feature extraction operator;

the fusion feature obtaining module 320 is configured to fuse the network features and the manual features obtained under the specified sampling multiple to obtain a fusion feature;

and the lane line obtaining module 330 is configured to obtain the lane lines contained in the image to be detected based on the fusion feature.
Optionally, the network feature and manual feature obtaining module includes: a network feature obtaining subunit, configured to obtain network features extracted from the image to be detected by a neural network under the specified sampling multiple;

a lane horizontal manual feature obtaining subunit, configured to obtain lane horizontal manual features extracted from the image to be detected by a feature extraction operator under the specified sampling multiple;

and a lane vertical manual feature obtaining subunit, configured to obtain lane vertical manual features extracted from the image to be detected by the feature extraction operator under the specified sampling multiple.
Optionally, the lane horizontal manual feature obtaining subunit is configured to obtain an initial horizontal feature extracted by a horizontal edge feature operator from the image to be detected under the specified sampling multiple;
smoothing the initial horizontal feature through a horizontal feature smoothing operator to obtain a horizontal smooth feature;
and intercepting the horizontal smooth feature through a feature truncation operator to obtain the horizontal manual feature of the lane.
Optionally, the lane vertical manual feature obtaining subunit is configured to obtain an initial vertical feature extracted by a vertical edge feature operator from the image to be detected under the specified sampling multiple;
smoothing the initial vertical feature through a vertical feature smoothing operator to obtain a vertical smooth feature;
and intercepting the vertical smooth feature through a feature truncation operator to obtain the vertical manual feature of the lane.
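The edge, smoothing, and truncation pipeline described for the two subunits can be sketched as follows. The concrete operators are assumptions, since the patent does not name them: a finite-difference gradient stands in for the edge feature operators, a 3-tap binomial kernel for the smoothing operators, and clipping for the feature truncation operator.

```python
import numpy as np

def _smooth(a: np.ndarray) -> np.ndarray:
    """Separable 3-tap binomial smoothing, a stand-in for the smoothing operator."""
    k = np.array([0.25, 0.5, 0.25])
    a = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, a)
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, a)

def lane_manual_features(gray: np.ndarray, clip: float = 1.0) -> np.ndarray:
    """Edge -> smooth -> truncate for both lane directions.

    Returns a 2-channel manual feature map: channel 0 is the lane
    horizontal feature, channel 1 the lane vertical feature.
    """
    d_row, d_col = np.gradient(gray)     # gradient across rows / across columns
    h_feat = np.clip(_smooth(d_row), -clip, clip)   # truncation operator
    v_feat = np.clip(_smooth(d_col), -clip, clip)
    return np.stack([h_feat, v_feat], axis=0)

gray = np.random.default_rng(1).random((64, 160)).astype(np.float32)
fm_hand = lane_manual_features(gray)
print(fm_hand.shape)  # (2, 64, 160)
```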
Optionally, the fusion feature obtaining module is configured to perform dimension splicing on the network features, the lane horizontal manual features and the lane vertical manual features obtained under the specified sampling multiple according to channels to obtain splicing features;
determining a weight under each channel according to an attention mechanism;
and multiplying the weight under each channel with the splicing characteristics to obtain fusion characteristics.
Optionally, the apparatus further includes a detection feature obtaining module, configured to input the fusion feature into the last filter layer in the neural network, where the sampling multiple corresponding to the last filter layer is greater than the specified sampling multiple;
and performing feature extraction on the fusion features through a filter layer to obtain detection features.
Optionally, the lane line obtaining module is configured to: processing the detection characteristics through a full connection layer in the neural network to obtain curve parameters corresponding to each lane line;
and determining the lane lines contained in the image to be detected according to the curve parameters corresponding to each lane line.
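Turning curve parameters into lane line points can be sketched as below. The patent only states that the fully connected layer outputs curve parameters per lane line; the cubic x = f(y) parameterisation and the function name are assumptions chosen because polynomials in the row coordinate are a common lane line model.

```python
import numpy as np

def lane_points_from_curve(params, rows):
    """Evaluate a per-lane cubic x = a*y^3 + b*y^2 + c*y + d at image rows.

    `params` is one lane line's curve parameter tuple (a, b, c, d);
    `rows` are the image row coordinates to sample. Returns (N, 2)
    image-space points as (x, y) pairs.
    """
    a, b, c, d = params
    y = np.asarray(list(rows), dtype=np.float64)
    x = a * y**3 + b * y**2 + c * y + d
    return np.stack([x, y], axis=1)

# nearly vertical lane: x = 0.1*y + 80, sampled every 16 rows
pts = lane_points_from_curve((0.0, 0.0, 0.1, 80.0), rows=range(0, 64, 16))
```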
The device can execute the lane line detection method provided by any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to the executed method. For technical details not described in detail in this embodiment, reference may be made to the method provided in any embodiment of the present invention.
Example four
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. FIG. 4 illustrates a block diagram of an exemplary electronic device 412 suitable for use in implementing embodiments of the present invention. The electronic device 412 shown in fig. 4 is only an example and should not bring any limitations to the functionality and scope of use of the embodiments of the present invention.
As shown in fig. 4, the electronic device 412 is in the form of a general purpose computing device. The components of the electronic device 412 may include, but are not limited to: one or more processors 416, a memory 428, and a bus 418 that couples the various system components (including the memory 428 and the processors 416).
Bus 418 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Electronic device 412 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 412 and includes both volatile and nonvolatile media, removable and non-removable media.
The memory 428 is used to store instructions. Memory 428 can include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 430 and/or cache memory 432. The electronic device 412 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 434 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 4, commonly referred to as a "hard drive"). Although not shown in FIG. 4, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 418 by one or more data media interfaces. Memory 428 can include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 440 having a set (at least one) of program modules 442 may be stored, for instance, in memory 428, such program modules 442 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. The program modules 442 generally perform the functions and/or methodologies of the described embodiments of the invention.
The electronic device 412 may also communicate with one or more external devices 414 (e.g., keyboard, pointing device, display 424, etc.), with one or more devices that enable a user to interact with the electronic device 412, and/or with any devices (e.g., network card, modem, etc.) that enable the electronic device 412 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 422. Also, the electronic device 412 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) through the network adapter 420. As shown, network adapter 420 communicates with the other modules of electronic device 412 over bus 418. It should be appreciated that although not shown in FIG. 4, other hardware and/or software modules may be used in conjunction with the electronic device 412, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processor 416 executes instructions stored in the memory 428 to perform the lane line detection method: acquiring network features extracted from an image to be detected by a neural network under a specified sampling multiple and manual features extracted by a feature extraction operator; fusing the network features and the manual features acquired under the specified sampling multiple to obtain a fusion feature; and acquiring the lane lines contained in the image to be detected based on the fusion feature.
Example five
An embodiment of the present invention further provides a storage medium containing computer-executable instructions, which when executed by a computer processor, perform a lane line detection method, the method including:
acquiring network characteristics extracted by a neural network of an image to be detected under a specified sampling multiple and manual characteristics extracted by a characteristic extraction operator; fusing the network characteristics and manual characteristics acquired under the specified sampling multiple to acquire fused characteristics; and acquiring the lane lines contained in the image to be detected based on the fusion characteristics.
From the above description of the embodiments, it will be apparent to those skilled in the art that the present invention can be implemented by software plus necessary general-purpose hardware, and certainly can also be implemented by hardware, but the former is a preferred embodiment in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a flash memory (FLASH), a hard disk, or an optical disk, and includes several instructions for enabling an electronic device (which may be a personal computer, a server, or a network device) to execute the lane line detection method according to the embodiments of the present invention.
It should be noted that, the units and modules included in the above embodiments are merely divided according to functional logic, but are not limited to the above division as long as the corresponding functions can be implemented; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A lane line detection method is characterized by comprising the following steps:
acquiring network characteristics extracted by a neural network of an image to be detected under a specified sampling multiple and manual characteristics extracted by a characteristic extraction operator;
fusing the network characteristics and manual characteristics acquired under the specified sampling multiple to acquire fused characteristics;
and acquiring the lane lines contained in the image to be detected based on the fusion characteristics.
2. The method of claim 1, wherein the acquiring network features extracted by a neural network under a specified sampling multiple of the image to be detected and the manual features extracted by a feature extraction operator comprises:
acquiring network characteristics of an image to be detected, which are extracted through a neural network under a specified sampling multiple;
acquiring lane horizontal manual features of an image to be detected, which are extracted by a feature extraction operator under a specified sampling multiple;
and acquiring the vertical manual features of the lane extracted by the feature extraction operator of the image to be detected under the specified sampling multiple.
3. The method according to claim 2, wherein the acquiring of lane level manual features extracted by a feature extraction operator under a specified sampling multiple of the image to be detected comprises:
acquiring initial horizontal features of an image to be detected, which are extracted by a horizontal edge feature operator under a specified sampling multiple;
smoothing the initial horizontal feature through a horizontal feature smoothing operator to obtain a horizontal smooth feature;
and intercepting the horizontal smooth feature through a feature truncation operator to obtain the horizontal manual feature of the lane.
4. The method as claimed in claim 3, wherein the step of obtaining the lane vertical manual features extracted by the feature extraction operator under the specified sampling multiple of the image to be detected comprises the following steps:
acquiring initial vertical features of an image to be detected, which are extracted through a vertical edge feature operator under a specified sampling multiple;
smoothing the initial vertical feature through a vertical feature smoothing operator to obtain a vertical smooth feature;
and intercepting the vertical smooth feature through a feature truncation operator to obtain the vertical manual feature of the lane.
5. The method according to claim 4, wherein fusing the network features and the manual features obtained under the specified sampling multiple to obtain fused features comprises:
carrying out dimension splicing on the network characteristics, the lane horizontal manual characteristics and the lane vertical manual characteristics which are obtained under the specified sampling multiple according to channels to obtain splicing characteristics;
determining a weight under each channel according to an attention mechanism;
and multiplying the weight under each channel with the splicing characteristic to obtain the fusion characteristic.
6. The method according to claim 1, wherein before the obtaining of the lane line included in the image to be detected based on the fused feature, the method further comprises:
inputting the fusion features into the last filter layer in the neural network, wherein the sampling multiple corresponding to the last filter layer is larger than the specified sampling multiple;
and performing feature extraction on the fusion features through the filter layer to obtain detection features.
7. The method according to claim 6, wherein the obtaining of the lane lines included in the image to be detected based on the fused features comprises:
processing the detection characteristics through a full connection layer in the neural network to obtain curve parameters corresponding to each lane line;
and determining the lane lines contained in the image to be detected according to the curve parameters corresponding to each lane line.
8. A lane line detection apparatus, characterized in that the apparatus comprises:
the network characteristic and manual characteristic acquisition module is used for acquiring network characteristics extracted by the image to be detected through a neural network under a specified sampling multiple and manual characteristics extracted by a characteristic extraction operator;
the fusion characteristic acquisition module is used for fusing the network characteristics and the manual characteristics acquired under the specified sampling multiple to acquire fusion characteristics;
and the lane line acquisition module is used for acquiring the lane lines contained in the image to be detected based on the fusion characteristics.
9. An electronic device, characterized in that the electronic device comprises:
one or more processors;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-7.
CN202110281549.2A 2021-03-16 2021-03-16 Lane line detection method and device, electronic equipment and storage medium Pending CN112966624A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110281549.2A CN112966624A (en) 2021-03-16 2021-03-16 Lane line detection method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN112966624A true CN112966624A (en) 2021-06-15

Family

ID=76278150

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110281549.2A Pending CN112966624A (en) 2021-03-16 2021-03-16 Lane line detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112966624A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115376082A (en) * 2022-08-02 2022-11-22 北京理工大学 Lane line detection method integrating traditional feature extraction and deep neural network



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination