CN112418187A - Lane line recognition method and apparatus, storage medium, and electronic device - Google Patents

Lane line recognition method and apparatus, storage medium, and electronic device Download PDF

Info

Publication number
CN112418187A
CN112418187A · Application CN202011478436.3A
Authority
CN
China
Prior art keywords
gray value
value
gray
pixel
interval
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011478436.3A
Other languages
Chinese (zh)
Inventor
吕轩轩
孟辉磊
孙凯信
苑鑫鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Weichai Power Co Ltd
Original Assignee
Weichai Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Weichai Power Co Ltd filed Critical Weichai Power Co Ltd
Priority to CN202011478436.3A
Publication of CN112418187A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/35 Determination of transform parameters for the alignment of images, i.e. image registration, using statistical methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/40 Analysis of texture
    • G06T 7/41 Analysis of texture based on statistical description of texture
    • G06T 7/44 Analysis of texture based on statistical description of texture using image operators, e.g. filters, edge density metrics or local histograms
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T 2207/30256 Lane; Road marking

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a lane line identification method and apparatus, a storage medium, and an electronic device. The method comprises: acquiring a road image and computing the mean of the gray values of all pixels in the road image; matching a target illumination intensity type corresponding to the gray mean of the road image from a plurality of preset illumination intensity types; determining a plurality of gray value intervals corresponding to the target illumination intensity type and a linear transformation function corresponding to each gray value interval; for each pixel, linearly transforming the gray value of the pixel using the linear transformation function of the gray value interval in which that gray value lies; performing gradient calculation on the linearly transformed gray value of each pixel with a horizontal Sobel operator and a vertical Sobel operator to obtain the gradient corresponding to each pixel; and determining pixels whose gradient is greater than a preset threshold as edge points of the lane line in the road image, thereby achieving accurate lane line identification under all illumination conditions.

Description

Lane line recognition method and apparatus, storage medium, and electronic device
Technical Field
The present disclosure relates to the field of road image recognition technologies, and in particular, to a method and an apparatus for recognizing a lane line, a storage medium, and an electronic device.
Background
Vision-based structured lane line identification is one of the core components of driver assistance; the accuracy of lane line identification therefore affects the accuracy of driver assistance and is one of the keys to ensuring driving safety.
In the conventional lane line recognition approach, a road image is acquired, and the lane line in it is then detected directly from the differences between the road and the lane line in certain characteristics of the image, such as color and gray scale.
However, this existing approach cannot adapt to changes in illumination. Under weak or complex illumination conditions, the contrast between the lane markings and the road surface is reduced, so the lane lines in the road image cannot be accurately recognized.
Disclosure of Invention
To address these defects of the prior art, the application provides a lane line identification method and apparatus, a storage medium, and an electronic device, so as to solve the problem that existing approaches cannot accurately identify lane lines under different illumination conditions.
In order to achieve the above object, the present application provides the following technical solutions:
the first aspect of the present application provides a lane line identification method, including:
acquiring a road image, and performing mean value calculation on gray values of all pixels in the road image to obtain a gray mean value of the road image;
matching a target illumination intensity type corresponding to the gray level mean value of the road image from a plurality of preset illumination intensity types;
determining a plurality of gray value intervals corresponding to the target illumination intensity type and a linear transformation function corresponding to each gray value interval; the gray value interval is obtained when the lane line and the road surface have the maximum contrast after linear transformation through multiple times of adjustment in advance;
for each pixel, performing linear transformation on the gray value of the pixel by using a linear transformation function corresponding to the gray value interval in which the gray value of the pixel is located;
respectively carrying out gradient calculation on the gray value of each pixel after linear transformation by using a horizontal Sobel operator and a vertical Sobel operator to obtain a gradient corresponding to each pixel;
and determining the pixels with the gradient larger than a preset threshold value as the edge points of the lane lines in the road image.
Optionally, in the foregoing method, the calculating a mean value of the gray scale values of all pixels in the road image to obtain a mean value of the gray scale values of the road image includes:
calculating the ratio of the number of the pixels corresponding to each gray value to the total number of the pixels of the road image to obtain a probability value corresponding to each gray value;
calculating the product of each gray value and the corresponding probability value of the gray value to obtain a weighted average value corresponding to each gray value;
and calculating the sum of the weighted average values corresponding to all the gray values to obtain the gray average value of the road image.
Optionally, in the foregoing method, the determining a plurality of gray value intervals corresponding to the target illumination intensity type and a linear transformation function corresponding to each gray value interval includes:
determining the coordinates of a first control point and the coordinates of a second control point corresponding to the target illumination intensity type; the coordinates of the first control point comprise a first original gray value and a first conversion gray value; the coordinates of the second control point comprise a second original gray value and a second transformed gray value; the first original gray value is smaller than the second original gray value, and the first converted gray value is smaller than the second converted gray value;
determining three gray value intervals corresponding to the target illumination intensity type according to the first original gray value and the second original gray value; the gray value interval comprises a first gray value interval, a second gray value interval and a third gray value interval; the first gray value interval is greater than or equal to 0 and smaller than the first original gray value; the second gray value interval is greater than or equal to the first original gray value and smaller than the second original gray value; the third gray value interval is greater than or equal to the second original gray value and is less than or equal to 255;
respectively determining linear transformation functions corresponding to the first gray value interval, the second gray value interval and the third gray value interval according to the coordinates of the first control point and the coordinates of the second control point; the linear transformation function corresponding to the first gray value interval is a linear equation corresponding to the origin and the line segment where the first control point is located; the linear transformation function corresponding to the second gray value interval is a linear equation corresponding to a line segment where the first control point and the second control point are located; the linear transformation function corresponding to the third gray value interval is a linear equation corresponding to the line segment where the second control point and the maximum value point are located; the coordinate values of the maximum point are all 255.
Optionally, in the above method, the performing, for each of the pixels, a linear transformation on the gray scale value of the pixel by using a linear transformation function corresponding to the gray scale value interval in which the gray scale value of the pixel is located includes:
if the gray value of the pixel is within the first gray value interval, multiplying the gray value of the pixel by a first conversion slope to obtain the gray value of the pixel after linear conversion; wherein the first transformation slope is a ratio of the first transformation gray value to the first original gray value;
if the gray value of the pixel is in the second gray value interval, multiplying the difference value between the gray value of the pixel and the first original gray value by a second conversion slope, and adding the obtained product to the first conversion gray value to obtain the gray value of the pixel after linear conversion; the second transformation slope is the ratio of the difference value of the second transformation gray value minus the first transformation gray value to the difference value of the second original gray value minus the first original gray value;
if the gray value of the pixel is within the third gray value interval, multiplying the difference value between the gray value of the pixel and the second original gray value by a third conversion slope, and adding the obtained product to the second conversion gray value to obtain the gray value of the pixel after linear conversion; the third transformation slope is a ratio of a difference value obtained by subtracting the second transformation gray value from 255 to a difference value obtained by subtracting the second original gray value from 255.
Optionally, in the foregoing method, the performing gradient calculation on the gray scale value of each pixel after linear transformation by using a horizontal sobel operator and a vertical sobel operator to obtain a gradient corresponding to each pixel includes:
respectively substituting the gray value of each pixel after linear transformation into the horizontal Sobel operator and the vertical Sobel operator to carry out calculation so as to obtain a first calculation result and a second calculation result corresponding to the pixel;
and performing a square-root operation on the sum of squares of the first calculation result and the second calculation result corresponding to each pixel to obtain the gradient corresponding to each pixel.
Optionally, in the above method, after determining the pixel with the gradient greater than the preset threshold as the edge point of the lane line in the road image, the method further includes:
and fitting the unitary cubic function by a least square method based on the position index values of the edge points to obtain a fitting function of the vehicle running track.
The second aspect of the present application provides a lane line identification apparatus, including:
an acquisition unit configured to acquire a road image;
the mean value calculation unit is used for carrying out mean value calculation on the gray values of all pixels in the road image to obtain a gray mean value of the road image;
the classification unit is used for matching a target illumination intensity type corresponding to the gray average value of the road image from a plurality of preset illumination intensity types;
the interval determining unit is used for determining a plurality of gray value intervals corresponding to the target illumination intensity types and a linear transformation function corresponding to each gray value interval; the gray value interval is obtained when the lane line and the road surface have the maximum contrast after linear transformation through multiple times of adjustment in advance;
the gray level conversion unit is used for carrying out linear conversion on the gray level value of the pixel by utilizing a linear conversion function corresponding to the gray level value interval where the gray level value of the pixel is located for each pixel;
the gradient calculation unit is used for respectively carrying out gradient calculation on the gray value of each pixel after linear transformation by utilizing a horizontal Sobel operator and a vertical Sobel operator to obtain the gradient corresponding to each pixel;
and the edge determining unit is used for determining the pixels with the gradient larger than a preset threshold value as the edge points of the lane lines in the road image.
Optionally, in the above apparatus, the mean value calculating unit includes:
the first calculation unit is used for calculating the ratio of the number of the pixels corresponding to each gray value to the total number of the pixels of the road image to obtain the probability value corresponding to each gray value;
the second calculation unit is used for calculating the product of each gray value and the corresponding probability value thereof to obtain a weighted average value corresponding to each gray value;
and the third calculating unit is used for calculating the sum of the weighted average values corresponding to all the gray values to obtain the gray average value of the road image.
Optionally, in the above apparatus, the section determining unit includes:
the control point determining unit is used for determining the coordinates of the first control point and the coordinates of the second control point corresponding to the target illumination intensity type; the coordinates of the first control point comprise a first original gray value and a first conversion gray value; the coordinates of the second control point comprise a second original gray value and a second transformed gray value; the first original gray value is smaller than the second original gray value, and the first converted gray value is smaller than the second converted gray value;
the interval dividing unit is used for determining three gray value intervals corresponding to the target illumination intensity type according to the first original gray value and the second original gray value; the gray value interval comprises a first gray value interval, a second gray value interval and a third gray value interval; the first gray value interval is greater than or equal to 0 and smaller than the first original gray value; the second gray value interval is greater than or equal to the first original gray value and smaller than the second original gray value; the third gray value interval is greater than or equal to the second original gray value and is less than or equal to 255;
a function determining unit, configured to determine, according to the coordinates of the first control point and the coordinates of the second control point, linear transformation functions corresponding to the first gray value interval, the second gray value interval, and the third gray value interval, respectively; the linear transformation function corresponding to the first gray value interval is a linear equation corresponding to the origin and the line segment where the first control point is located; the linear transformation function corresponding to the second gray value interval is a linear equation corresponding to a line segment where the first control point and the second control point are located; the linear transformation function corresponding to the third gray value interval is a linear equation corresponding to the line segment where the second control point and the maximum value point are located; the coordinate values of the maximum point are all 255.
Optionally, in the above apparatus, the gray scale conversion unit is configured to, when performing the linear conversion on the gray scale value of the pixel by using a linear conversion function corresponding to the gray scale value interval in which the gray scale value of the pixel is located for each of the pixels, perform:
if the gray value of the pixel is within the first gray value interval, multiplying the gray value of the pixel by a first conversion slope to obtain the gray value of the pixel after linear conversion; wherein the first transformation slope is a ratio of the first transformation gray value to the first original gray value;
if the gray value of the pixel is in the second gray value interval, multiplying the difference value between the gray value of the pixel and the first original gray value by a second conversion slope, and adding the obtained product to the first conversion gray value to obtain the gray value of the pixel after linear conversion; the second transformation slope is the ratio of the difference value of the second transformation gray value minus the first transformation gray value to the difference value of the second original gray value minus the first original gray value;
if the gray value of the pixel is within the third gray value interval, multiplying the difference value between the gray value of the pixel and the second original gray value by a third conversion slope, and adding the obtained product to the second conversion gray value to obtain the gray value of the pixel after linear conversion; the third transformation slope is a ratio of a difference value obtained by subtracting the second transformation gray value from 255 to a difference value obtained by subtracting the second original gray value from 255.
Optionally, in the above apparatus, the gradient calculating unit includes:
the fourth calculation unit is used for substituting the gray value of each pixel after linear transformation into the horizontal Sobel operator and the vertical Sobel operator respectively to perform calculation so as to obtain a first calculation result and a second calculation result corresponding to the pixel;
and the fifth calculation unit is used for performing a square-root operation on the sum of squares of the first calculation result and the second calculation result corresponding to each pixel to obtain the gradient corresponding to each pixel.
Optionally, in the above apparatus, further comprising:
and the fitting unit is used for fitting the unitary cubic function by a least square method based on the position index values of the edge points to obtain a fitting function of the vehicle running track.
A third aspect of the present application provides a storage medium for storing a computer program for implementing the lane line identification method according to any one of the above when the computer program is executed.
A fourth aspect of the present application provides an electronic device, comprising:
a memory and a processor;
wherein the memory is used for storing programs;
the processor is configured to execute the program, and when the program is executed, the program is specifically configured to implement the lane line identification method according to any one of the above items.
The application provides a lane line identification method. A road image is acquired and the mean of the gray values of all its pixels is computed; based on this gray mean, the road image is classified by illumination intensity, so that gray value transformation can then be performed adaptively on the road image. Specifically, a plurality of gray value intervals corresponding to the target illumination intensity type and a linear transformation function corresponding to each gray value interval are determined, and for each pixel the gray value is linearly transformed with the function of the interval in which it lies, which effectively enhances the contrast between the lane line and the road surface. Finally, gradient calculation is performed on the linearly transformed gray value of each pixel with the horizontal and vertical Sobel operators, and pixels whose gradient is greater than a preset threshold are determined as edge points of the lane line in the road image, i.e., the lane line is identified. The lane line in the road image can thus be effectively identified under different illumination conditions.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, it is obvious that the drawings in the following description are only embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
Fig. 1 is a flowchart of a lane line identification method according to an embodiment of the present disclosure;
fig. 2 is a flowchart of a method for calculating a mean gray level of a road image according to another embodiment of the present application;
fig. 3 is a flowchart of a method for determining a gray scale value interval and a corresponding linear transformation function according to another embodiment of the present application;
FIG. 4 is a flowchart illustrating a method for calculating a gradient corresponding to a pixel according to another embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a lane line identification apparatus according to another embodiment of the present application;
fig. 6 is a schematic structural diagram of a mean value calculating unit according to another embodiment of the present application;
fig. 7 is a schematic structural diagram of an interval determining unit according to another embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device according to another embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In this application, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The embodiment of the application provides a lane line identification method, as shown in fig. 1, specifically comprising the following steps:
s101, acquiring a road image, and performing mean value calculation on gray values of all pixels in the road image to obtain a gray mean value of the road image.
It should be noted that the gray values of the pixels of the acquired road image are larger when the illumination intensity is higher. In the embodiment of the present application, the current illumination intensity is therefore determined from the mean gray value of all pixels in the road image, so that the road image can be processed according to the illumination intensity to increase the contrast between the road surface and the lane line in the road image.
The gray level average value of the road image may be an arithmetic mean value of gray levels of all pixels of the road image, or may be a weighted mean value.
Optionally, in another embodiment of the present application, a specific implementation manner of step S101 is provided, as shown in fig. 2, and includes:
s201, calculating the ratio of the number of the pixels corresponding to each gray value to the total number of the pixels of the road image to obtain the probability value corresponding to each gray value.
It should be noted that, in the embodiment of the present application, a weighted average of the gray scale values of the pixels of the road image is calculated, so that the proportion of the pixels belonging to each gray scale value in the road image is first calculated in step S201, that is, the weight of each gray scale value is obtained.
S202, calculating the product of each gray value and the corresponding probability value of each gray value to obtain the weighted average value corresponding to each gray value.
And S203, calculating the sum of the weighted average values corresponding to all the gray values to obtain the gray average value of the road image.
As can be seen from steps S201 to S203, the gray mean of the road image is computed as:

$$\bar{g} = \sum_{i=0}^{255} i \cdot x(i)$$

where

$$x(i) = \frac{n_i}{N}$$

$n_i$ is the number of pixels having gray value $i$, $N$ is the total number of pixels of the road image, and $x(i)$ is the probability corresponding to gray value $i$.
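For reference, a minimal NumPy sketch of this histogram-weighted mean follows (the function name and the 8-bit single-channel input are assumptions; numerically the result equals the plain arithmetic mean of the pixel gray values):

```python
import numpy as np

def gray_mean(image: np.ndarray) -> float:
    """Weighted gray mean: sum_i i * x(i), with x(i) = n_i / N
    (steps S201-S203). Assumes an 8-bit single-channel image."""
    counts = np.bincount(image.ravel(), minlength=256)  # n_i for i = 0..255
    probs = counts / image.size                         # x(i) = n_i / N
    return float(np.sum(np.arange(256) * probs))
```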
S102, matching a target illumination intensity type corresponding to the gray level mean value of the road image from a plurality of preset illumination intensity types.
Specifically, the illumination intensity may be divided into a plurality of ranges in advance, the range of gray means of road images acquired under each illumination intensity range is calculated, and each illumination intensity range is defined as an illumination intensity type. Once the gray mean of the road image has been calculated, the target illumination intensity type is matched from the preset illumination intensity types according to the gray mean range into which the road image falls.
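As an illustration only, such a lookup might look like the sketch below; the three type names and their gray-mean boundaries are invented placeholders, since the patent does not specify concrete ranges:

```python
# Hypothetical illumination types keyed by gray-mean range; the boundary
# values and names are assumptions, not taken from the patent.
ILLUMINATION_TYPES = [
    (0.0, 60.0, "low light"),
    (60.0, 120.0, "medium light"),
    (120.0, 256.0, "strong light"),
]

def match_illumination_type(mean_gray: float) -> str:
    """Return the illumination type whose gray-mean range contains mean_gray."""
    for low, high, name in ILLUMINATION_TYPES:
        if low <= mean_gray < high:
            return name
    raise ValueError("gray mean outside the 0..255 range")
```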
S103, determining a plurality of gray value intervals corresponding to the target illumination intensity type and a linear transformation function corresponding to each gray value interval.
It should be noted that lane lines are brightly colored and highly reflective, whereas the road surface is dark and reflects little light, so the image pixels corresponding to the lane lines and the road surface usually fall into two different gray value ranges. In the embodiment of the present application, the gray values of pixels in a plurality of different gray value intervals are therefore increased or decreased with linear transformation functions, which increases the contrast between the road surface and the lane line in the image.
The gray value intervals of the image pixels corresponding to the lane line and the road surface change with the illumination intensity, and so does the degree of separation between the two intervals; for example, under poor illumination the lane-line and road-surface gray value intervals differ less. Therefore, to increase the contrast between the lane line and the road surface through accurate linear transformation of gray values, the gray value intervals and the linear transformation function corresponding to each interval must be adapted to the different illumination intensities.
In the embodiment of the present application, each gray value interval is obtained by multiple rounds of adjustment in advance, such that the lane line and the road surface have the maximum contrast after linear transformation. Specifically, under the illumination intensity corresponding to each illumination intensity type, the gray value intervals and the parameters of the linear transformation function corresponding to each interval are adjusted repeatedly until the lane line and the road surface have the maximum contrast; adjustment then stops, the gray value intervals at that point are set as the intervals corresponding to the current illumination intensity type, and the linear transformation functions at that point as the functions corresponding to those intervals.
Optionally, in another embodiment of the present application, as shown in fig. 3, a specific implementation method of step S103 includes the following steps:
s301, determining coordinates of a first control point and coordinates of a second control point corresponding to the target illumination intensity type, wherein the coordinates of the first control point comprise a first original gray value and a first conversion gray value, and the coordinates of the second control point comprise a second original gray value and a second conversion gray value.
The original gray value refers to a gray value of a pixel on the road image when the road image is acquired, and the converted gray value is a gray value obtained by linearly converting the gray value of the pixel. The first original gray value is smaller than the second original gray value, and the first converted gray value is smaller than the second converted gray value. Moreover, since the range of the gray-level values is from 0 to 255, the first original gray-level value, the second original gray-level value, the first conversion gray-level value and the second conversion gray-level value are not less than 0 and not more than 255.
It should be noted that, in the embodiment of the present application, the adjustment of the gray value interval is controlled by the coordinates of the two control points, and the corresponding linear transformation function is determined.
S302, determining three gray value intervals corresponding to the target illumination intensity type according to the first original gray value and the second original gray value.
Since gray values range from 0 to 255 and the first and second original gray values are two values within this range, the gray value range is divided into exactly three gray value intervals: a first gray value interval, a second gray value interval, and a third gray value interval.
Specifically, the first gray value interval is greater than or equal to 0 and smaller than the first original gray value. The second gray value interval is greater than or equal to the first original gray value and smaller than the second original gray value. The third gray value interval is greater than or equal to the second original gray value and is less than or equal to 255.
And S303, respectively determining linear transformation functions corresponding to the first gray value interval, the second gray value interval and the third gray value interval according to the coordinates of the first control point and the coordinates of the second control point.
Specifically, in the coordinate system of the control points, one coordinate axis corresponds to the original gray value and the other to the transformed gray value, and both range from 0 to 255. The origin (0, 0) therefore indicates that an original gray value of 0 is transformed to 0, and the maximum value point (255, 255) indicates that an original gray value of 255 is transformed to 255. For each gray value interval, the equation of the line segment connecting the coordinate points of its two end-point gray values can thus be used as the linear transformation function corresponding to that interval.
Accordingly, the linear transformation function corresponding to the first gray value interval is the equation of the line segment from the origin to the first control point; the linear transformation function corresponding to the second gray value interval is the equation of the line segment from the first control point to the second control point; and the linear transformation function corresponding to the third gray value interval is the equation of the line segment from the second control point to the maximum value point.
Therefore, according to the above, the three gray value intervals and their corresponding linear transformation functions can be expressed as:

$$\rho = \begin{cases} \dfrac{\rho_1}{\gamma_1}\,\gamma, & 0 \le \gamma < \gamma_1 \\[1.5ex] \dfrac{\rho_2 - \rho_1}{\gamma_2 - \gamma_1}\,(\gamma - \gamma_1) + \rho_1, & \gamma_1 \le \gamma < \gamma_2 \\[1.5ex] \dfrac{255 - \rho_2}{255 - \gamma_2}\,(\gamma - \gamma_2) + \rho_2, & \gamma_2 \le \gamma \le 255 \end{cases}$$

where the first, second, and third gray value intervals correspond to the first, second, and third cases respectively; $\gamma$ is the original gray value of a pixel of the road image, $\rho$ is its transformed gray value, $\gamma_1$ is the first original gray value, $\rho_1$ the first transformed gray value, $\gamma_2$ the second original gray value, and $\rho_2$ the second transformed gray value.
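A vectorized NumPy sketch of this three-segment transform follows, assuming an 8-bit gray image and control points with 0 < γ₁ < γ₂ < 255 (the function and argument names are illustrative):

```python
import numpy as np

def piecewise_transform(image: np.ndarray, p1, p2) -> np.ndarray:
    """Three-segment gray stretch through control points p1 = (gamma1, rho1)
    and p2 = (gamma2, rho2), per the piecewise function above.
    Assumes 0 < gamma1 < gamma2 < 255."""
    g1, r1 = p1
    g2, r2 = p2
    g = image.astype(np.float32)
    out = np.empty_like(g)

    low, mid, high = g < g1, (g >= g1) & (g < g2), g >= g2
    out[low] = (r1 / g1) * g[low]                                  # first segment
    out[mid] = (r2 - r1) / (g2 - g1) * (g[mid] - g1) + r1          # second segment
    out[high] = (255.0 - r2) / (255.0 - g2) * (g[high] - g2) + r2  # third segment
    return np.clip(out, 0, 255).astype(np.uint8)
```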
Therefore, in the embodiment of the present application, a road image is acquired in advance under the illumination intensity corresponding to each illumination intensity type. The gray value intervals are adjusted by repeatedly adjusting the coordinates of the first control point and the second control point; after each corresponding adjustment of the linear functions of the intervals, the gray values of the pixels are transformed, and when the road surface and the lane line in the road image reach the maximum contrast, the coordinates of the first and second control points at that moment are set as the coordinates of the first and second control points corresponding to the current illumination intensity type.
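This offline adjustment could be sketched as a brute-force search such as the one below; the contrast measure (difference of mean gray values over labeled lane and road regions) and the coarse grid are assumptions, since the patent only states that the control points are adjusted until the contrast is maximal:

```python
import itertools

def calibrate_control_points(image, lane_mask, road_mask, step=16):
    """Grid search for control points maximizing lane/road contrast after
    piecewise_transform (defined above). lane_mask and road_mask are boolean
    arrays marking hand-labeled lane and road pixels; the contrast metric is
    an assumed stand-in for the patent's 'maximum contrast' criterion."""
    best, best_contrast = None, -1.0
    levels = range(step, 256 - step, step)  # coarse grid keeps the search tractable
    for g1, r1, g2, r2 in itertools.product(levels, repeat=4):
        if not (g1 < g2 and r1 < r2):
            continue  # control-point ordering required by the method
        t = piecewise_transform(image, (g1, r1), (g2, r2)).astype(float)
        contrast = abs(t[lane_mask].mean() - t[road_mask].mean())
        if contrast > best_contrast:
            best, best_contrast = ((g1, r1), (g2, r2)), contrast
    return best
```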
And S104, performing linear transformation on the gray value of the pixel by utilizing a linear transformation function corresponding to the gray value interval where the gray value of the pixel is located for each pixel.
Specifically, the gray value of each pixel is input into the linear transformation function corresponding to the gray value interval in which the pixel is located, the transformation gray value of the pixel is obtained through calculation, and the gray value of the pixel is adjusted to the transformation gray value.
Optionally, when the specific implementation of step S103 is the implementation corresponding to fig. 3, a corresponding implementation of step S104 specifically is:
and if the gray value of the pixel is in the first gray value interval, multiplying the gray value of the pixel by the first conversion slope to obtain the gray value of the pixel after linear conversion.
And the first transformation slope is the ratio of the first transformation gray value to the first original gray value.
And if the gray value of the pixel is in the second gray value interval, multiplying the difference value between the gray value of the pixel and the first original gray value by a second conversion slope, and adding the obtained product to the first conversion gray value to obtain the gray value of the pixel after linear conversion.
The second transformation slope is the ratio of the difference value of the second transformation gray value minus the first transformation gray value to the difference value of the second original gray value minus the first original gray value.
And if the gray value of the pixel is in the third gray value interval, multiplying the difference value between the gray value of the pixel and the second original gray value by a third conversion slope, and adding the obtained product to the second conversion gray value to obtain the gray value of the pixel after linear conversion.
The third transformation slope is a ratio of a difference value obtained by subtracting the second transformation gray value from 255 to a difference value obtained by subtracting the second original gray value from 255.
And S105, respectively carrying out gradient calculation on the gray value of each pixel after linear transformation by using the horizontal Sobel operator and the vertical Sobel operator to obtain the gradient corresponding to each pixel.
Specifically, the horizontal Sobel operator and the vertical Sobel operator are discrete difference operators, each a matrix of 3 rows and 3 columns. They are applied to the image by planar convolution to obtain approximations of the brightness differences in the horizontal and vertical directions, and can therefore be used for edge detection.
Optionally, a specific implementation method of step S105, as shown in fig. 4, includes the following steps:
s401, respectively substituting the gray value of each pixel after linear transformation into a horizontal Sobel operator and a vertical Sobel operator for calculation to obtain a first calculation result and a second calculation result corresponding to the pixel.
S402, performing a square-root operation on the sum of squares of the first calculation result and the second calculation result corresponding to each pixel, to obtain the gradient corresponding to each pixel.
It should be noted that this is only one option; the sum of the absolute values of the first calculation result and the second calculation result corresponding to each pixel may also be used directly as the gradient corresponding to that pixel.
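For illustration, the sketch below implements steps S401 and S402 with explicit 3×3 Sobel kernels and adds the thresholding of step S106; the SciPy convolution helper is an assumed dependency, and the threshold value is application-specific:

```python
import numpy as np
from scipy.ndimage import convolve

# Standard 3x3 horizontal and vertical Sobel kernels.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float32)
SOBEL_Y = SOBEL_X.T

def sobel_gradient(image: np.ndarray) -> np.ndarray:
    """Gradient magnitude sqrt(Gx^2 + Gy^2) per pixel (steps S401-S402)."""
    g = image.astype(np.float32)
    gx = convolve(g, SOBEL_X, mode="reflect")  # first calculation result
    gy = convolve(g, SOBEL_Y, mode="reflect")  # second calculation result
    return np.sqrt(gx * gx + gy * gy)

def lane_edge_points(image: np.ndarray, threshold: float) -> np.ndarray:
    """Step S106: (row, col) indices of pixels whose gradient exceeds the threshold."""
    return np.argwhere(sobel_gradient(image) > threshold)
```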
And S106, determining the pixels with the gradient larger than a preset threshold value as the edge points of the lane lines in the road image.
It should be noted that the identified edge points together form the edge line of the lane line, so once the edge points of the lane line are identified, the lane line itself is identified.
Optionally, in another embodiment of the present application, after the step S106 is executed to obtain the edge point of the lane line in the road image, the following steps may be further executed:
and fitting the unitary cubic function by a least square method based on the position index values of the plurality of edge points to obtain a fitting function of the vehicle running track.
Specifically, by the least squares method, the position index values of the plurality of edge points are used as observed values of the univariate cubic function y = a + bx + cx² + dx³; the parameters of the function are computed and substituted back into it, giving the fitting function of the vehicle driving track. The fitting procedure itself is well established and is not described again here.
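A minimal sketch of this fit using NumPy's polynomial least squares follows; treating the edge points' (row, column) indices as (y, x) observations is an assumed coordinate convention:

```python
import numpy as np

def fit_lane_track(edge_points: np.ndarray) -> np.ndarray:
    """Least-squares fit of y = a + b*x + c*x^2 + d*x^3 to the edge points
    returned by lane_edge_points (rows are (y, x) index pairs)."""
    y = edge_points[:, 0].astype(float)
    x = edge_points[:, 1].astype(float)
    d, c, b, a = np.polyfit(x, y, deg=3)  # polyfit returns highest power first
    return np.array([a, b, c, d])         # coefficients of a + b*x + c*x^2 + d*x^3
```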
The embodiment of the application provides a lane line identification method. A road image is acquired and the mean of the gray values of all its pixels is computed; based on this gray mean, the road image is classified by illumination intensity, so that gray value transformation can then be performed adaptively on the road image. Specifically, a plurality of gray value intervals corresponding to the target illumination intensity type and a linear transformation function corresponding to each gray value interval are determined, and for each pixel the gray value is linearly transformed with the function of the interval in which it lies, which effectively enhances the contrast between the lane line and the road surface. Finally, gradient calculation is performed on the linearly transformed gray value of each pixel with the horizontal and vertical Sobel operators, and pixels whose gradient is greater than a preset threshold are determined as edge points of the lane line in the road image, i.e., the lane line is identified. The lane line in the road image can thus be effectively identified under different illumination conditions.
Another embodiment of the present application provides a lane line identification apparatus, as shown in fig. 5, including the following units:
an acquiring unit 501 is used for acquiring a road image.
The mean value calculating unit 502 is configured to perform mean value calculation on the gray values of all pixels in the road image to obtain a gray mean value of the road image.
The classifying unit 503 is configured to match a target illumination intensity type corresponding to the mean grayscale value of the road image from a plurality of preset illumination intensity types.
An interval determining unit 504 is configured to determine a plurality of gray value intervals corresponding to the target illumination intensity types and a linear transformation function corresponding to each gray value interval.
The gray value interval is obtained when the lane line and the road surface have the maximum contrast after linear transformation through multiple times of adjustment.
The gray scale conversion unit 505 is configured to perform linear conversion on the gray scale value of the pixel by using a linear conversion function corresponding to the gray scale value interval in which the gray scale value of the pixel is located, for each pixel.
And the gradient calculating unit 506 is configured to perform gradient calculation on the gray value of each pixel after the linear transformation by using the horizontal sobel operator and the vertical sobel operator, so as to obtain a gradient corresponding to each pixel.
An edge determining unit 507, configured to determine a pixel with a gradient greater than a preset threshold as an edge point of a lane line in the road image.
Optionally, an average calculating unit in the lane line identification apparatus according to another embodiment of the present application is shown in fig. 6, and includes:
the first calculating unit 601 is configured to calculate a ratio between the number of pixels corresponding to each gray value and the total number of pixels in the road image, so as to obtain a probability value corresponding to each gray value.
The second calculating unit 602 is configured to calculate a product of each gray value and its corresponding probability value, and obtain a weighted average value corresponding to each gray value.
The third calculating unit 603 is configured to calculate a sum of weighted averages corresponding to all gray values to obtain a gray average of the road image.
Optionally, an interval determining unit in the lane line identification apparatus according to another embodiment of the present application is shown in fig. 7, and includes:
a control point determining unit 701, configured to determine coordinates of a first control point and coordinates of a second control point corresponding to the target illumination intensity type.
The coordinates of the first control point comprise a first original gray value and a first conversion gray value; the coordinates of the second control point comprise a second original gray value and a second conversion gray value; the first original gray value is smaller than the second original gray value, and the first converted gray value is smaller than the second converted gray value.
The interval dividing unit 702 is configured to determine three gray value intervals corresponding to the target illumination intensity type according to the first original gray value and the second original gray value.
The gray value interval comprises a first gray value interval, a second gray value interval and a third gray value interval; the first gray value interval is greater than or equal to 0 and smaller than the first original gray value; the second gray value interval is greater than or equal to the first original gray value and smaller than the second original gray value; the third gray value interval is greater than or equal to the second original gray value and is less than or equal to 255.
A function determining unit 703, configured to determine, according to the coordinates of the first control point and the coordinates of the second control point, linear transformation functions corresponding to the first gray value interval, the second gray value interval, and the third gray value interval, respectively; the linear transformation function corresponding to the first gray value interval is a linear equation corresponding to the origin and the line segment where the first control point is located; the linear transformation function corresponding to the second gray value interval is a linear equation corresponding to a line segment where the first control point and the second control point are located; the linear transformation function corresponding to the third gray value interval is a linear equation corresponding to the line segment where the second control point and the maximum value point are located; the coordinate values of the maximum point are all 255.
In this embodiment, when performing, for each pixel, the linear conversion of the pixel's gray value using the linear conversion function corresponding to the gray value interval in which that gray value lies, the gray scale conversion unit is configured to:
and if the gray value of the pixel is in the first gray value interval, multiplying the gray value of the pixel by the first conversion slope to obtain the gray value of the pixel after linear conversion.
The first conversion slope is a ratio of the first conversion gray value to the first original gray value.
And if the gray value of the pixel is in the second gray value interval, multiplying the difference value between the gray value of the pixel and the first original gray value by a second conversion slope, and adding the obtained product to the first conversion gray value to obtain the gray value of the pixel after linear conversion.
The second conversion slope is the ratio of the difference value of the second conversion gray value minus the first conversion gray value to the difference value of the second original gray value minus the first original gray value.
And if the gray value of the pixel is in the third gray value interval, multiplying the difference value between the gray value of the pixel and the second original gray value by a third conversion slope, and adding the obtained product to the second conversion gray value to obtain the gray value of the pixel after linear conversion.
The third transformation slope is a ratio of a difference value obtained by subtracting the second transformation gray value from 255 to a difference value obtained by subtracting the second original gray value from 255.
Optionally, in a lane line identification apparatus provided in another embodiment of the present application, the gradient calculation unit includes:
and the fourth calculation unit is used for substituting the gray value of the pixel into the horizontal Sobel operator and the vertical Sobel operator respectively for calculation aiming at each pixel after linear transformation to obtain a first calculation result and a second calculation result corresponding to the pixel.
And the fifth calculation unit is used for respectively carrying out square root operation on the square sum of the first calculation result and the second calculation result corresponding to each pixel to obtain the gradient corresponding to each pixel.
Optionally, in the lane line identification apparatus provided in another embodiment of the present application, the lane line identification apparatus may further include:
and the fitting unit is used for fitting the unitary cubic function by a least square method based on the position index values of the plurality of edge points to obtain a fitting function of the vehicle driving track.
It should be noted that, for the specific working processes of each unit provided in the foregoing embodiments of the present application, reference may be made to the specific implementation of the corresponding step in the foregoing method embodiments, and details are not described here again.
Another embodiment of the present application provides an electronic device, as shown in fig. 8, including:
a memory 801 and a processor 802.
The memory 801 is used for storing programs, and the processor 802 is used for executing the programs stored in the memory 801, and when the programs are executed, the programs are specifically used for implementing the lane line identification method provided by any one of the above embodiments.
A third aspect of the present application provides a storage medium for storing a computer program, which when executed, is configured to implement the lane line identification method provided in any one of the above embodiments.
It should be noted that the storage medium provided in the embodiments of the present application is a computer storage medium. Computer storage media, including permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer readable media does not include transitory computer readable media (transitory media) such as modulated data signals and carrier waves.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A lane line identification method is characterized by comprising the following steps:
acquiring a road image, and performing mean value calculation on gray values of all pixels in the road image to obtain a gray mean value of the road image;
matching a target illumination intensity type corresponding to the gray level mean value of the road image from a plurality of preset illumination intensity types;
determining a plurality of gray value intervals corresponding to the target illumination intensity type and a linear transformation function corresponding to each gray value interval; the gray value interval is obtained when the lane line and the road surface have the maximum contrast after linear transformation through multiple times of adjustment in advance;
for each pixel, performing linear transformation on the gray value of the pixel by using a linear transformation function corresponding to the gray value interval in which the gray value of the pixel is located;
respectively carrying out gradient calculation on the gray value of each pixel after linear transformation by using a horizontal Sobel operator and a vertical Sobel operator to obtain a gradient corresponding to each pixel;
and determining the pixels with the gradient larger than a preset threshold value as the edge points of the lane lines in the road image.
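To make the claimed pipeline concrete, the following is a minimal, non-authoritative Python sketch of the claim 1 steps, assuming OpenCV and NumPy. The illumination-type table, its control points, and the gradient threshold below are invented placeholders for illustration; the patent does not disclose concrete values.

```python
# Illustrative sketch of the claim 1 pipeline; all constants are hypothetical.
import cv2
import numpy as np

# Hypothetical illumination intensity types: (upper bound on gray mean,
# ((r1, s1), (r2, s2)) control points for the piecewise linear transform).
ILLUMINATION_TYPES = [
    (85,  ((60, 20),  (150, 230))),   # dark scene: stretch mid-tones strongly
    (170, ((70, 40),  (180, 220))),   # medium illumination
    (256, ((100, 60), (200, 240))),   # bright scene
]

def detect_lane_edge_points(road_gray, grad_threshold=120.0):
    # Step 1: gray mean of the road image.
    mean_gray = float(road_gray.mean())
    # Step 2: match the target illumination intensity type by the gray mean.
    for upper, ((r1, s1), (r2, s2)) in ILLUMINATION_TYPES:
        if mean_gray < upper:
            break
    # Steps 3-4: piecewise linear transform through (0,0), (r1,s1), (r2,s2),
    # (255,255), applied to every pixel via a 256-entry lookup table.
    lut = np.interp(np.arange(256), [0, r1, r2, 255], [0, s1, s2, 255])
    enhanced = lut[road_gray]
    # Step 5: horizontal and vertical Sobel gradients, combined into a magnitude.
    gx = cv2.Sobel(enhanced, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(enhanced, cv2.CV_64F, 0, 1, ksize=3)
    grad = np.sqrt(gx ** 2 + gy ** 2)
    # Step 6: pixels whose gradient exceeds the threshold are edge points.
    return np.argwhere(grad > grad_threshold)
```

Called, for example, as detect_lane_edge_points(cv2.imread("road.png", cv2.IMREAD_GRAYSCALE)), the function returns the (row, column) coordinates of candidate lane line edge points.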
2. The method according to claim 1, wherein performing mean value calculation on the gray values of all pixels in the road image to obtain the gray mean value of the road image comprises:
calculating the ratio of the number of the pixels corresponding to each gray value to the total number of the pixels of the road image to obtain a probability value corresponding to each gray value;
calculating the product of each gray value and the corresponding probability value of the gray value to obtain a weighted average value corresponding to each gray value;
and calculating the sum of the weighted average values corresponding to all the gray values to obtain the gray average value of the road image.
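A short sketch of the histogram form of this computation, assuming 8-bit gray values. Note that the weighted sum over gray values is algebraically identical to the plain arithmetic mean of all pixels; the histogram route merely reorganizes the summation.

```python
import numpy as np

def gray_mean_by_histogram(road_gray):
    # Probability of each gray value: its pixel count / total pixel count.
    counts = np.bincount(road_gray.ravel(), minlength=256)
    probs = counts / road_gray.size
    # Weighted average per gray value, summed over all 256 gray values.
    return float(np.sum(np.arange(256) * probs))
```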
3. The method of claim 1, wherein determining a plurality of gray value intervals corresponding to the target illumination intensity type and a linear transformation function corresponding to each gray value interval comprises:
determining the coordinates of a first control point and the coordinates of a second control point corresponding to the target illumination intensity type; the coordinates of the first control point comprise a first original gray value and a first transformed gray value; the coordinates of the second control point comprise a second original gray value and a second transformed gray value; the first original gray value is smaller than the second original gray value, and the first transformed gray value is smaller than the second transformed gray value;
determining three gray value intervals corresponding to the target illumination intensity type according to the first original gray value and the second original gray value; the gray value interval comprises a first gray value interval, a second gray value interval and a third gray value interval; the first gray value interval is greater than or equal to 0 and smaller than the first original gray value; the second gray value interval is greater than or equal to the first original gray value and smaller than the second original gray value; the third gray value interval is greater than or equal to the second original gray value and is less than or equal to 255;
respectively determining the linear transformation functions corresponding to the first gray value interval, the second gray value interval and the third gray value interval according to the coordinates of the first control point and the coordinates of the second control point; the linear transformation function corresponding to the first gray value interval is the linear equation of the segment connecting the origin and the first control point; the linear transformation function corresponding to the second gray value interval is the linear equation of the segment connecting the first control point and the second control point; the linear transformation function corresponding to the third gray value interval is the linear equation of the segment connecting the second control point and the maximum value point, both coordinates of the maximum value point being 255.
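Read as code, claim 3 builds three line segments from the two control points. A sketch follows, writing the first control point as (r1, s1) and the second as (r2, s2), both assumed to satisfy 0 < r1 < r2 < 255 so that every slope is defined:

```python
def build_piecewise_transform(r1, s1, r2, s2):
    k1 = s1 / r1                   # segment from the origin to (r1, s1)
    k2 = (s2 - s1) / (r2 - r1)     # segment from (r1, s1) to (r2, s2)
    k3 = (255 - s2) / (255 - r2)   # segment from (r2, s2) to (255, 255)

    def transform(g):
        if g < r1:                 # first gray value interval [0, r1)
            return k1 * g
        if g < r2:                 # second gray value interval [r1, r2)
            return s1 + k2 * (g - r1)
        return s2 + k3 * (g - r2)  # third gray value interval [r2, 255]

    return transform
```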
4. The method according to claim 3, wherein, for each pixel, performing linear transformation on the gray value of the pixel by using the linear transformation function corresponding to the gray value interval in which the gray value of the pixel is located comprises:
if the gray value of the pixel is within the first gray value interval, multiplying the gray value of the pixel by a first transformation slope to obtain the linearly transformed gray value of the pixel; wherein the first transformation slope is the ratio of the first transformed gray value to the first original gray value;
if the gray value of the pixel is within the second gray value interval, multiplying the difference between the gray value of the pixel and the first original gray value by a second transformation slope, and adding the resulting product to the first transformed gray value to obtain the linearly transformed gray value of the pixel; wherein the second transformation slope is the ratio of the second transformed gray value minus the first transformed gray value to the second original gray value minus the first original gray value;
if the gray value of the pixel is within the third gray value interval, multiplying the difference between the gray value of the pixel and the second original gray value by a third transformation slope, and adding the resulting product to the second transformed gray value to obtain the linearly transformed gray value of the pixel; wherein the third transformation slope is the ratio of 255 minus the second transformed gray value to 255 minus the second original gray value.
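The three branches above map directly onto the transform function from the previous sketch. Since a per-pixel Python loop would be slow, a common implementation trick (not stated in the claims) is to evaluate the function once per possible gray value and map the whole image through the resulting lookup table:

```python
import numpy as np

def apply_transform(road_gray, transform):
    # Precompute the transformed value of every possible gray level 0..255,
    # then index the whole uint8 image through the lookup table at once.
    lut = np.array([transform(g) for g in range(256)], dtype=np.float64)
    return lut[road_gray]
```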
5. The method according to claim 1, wherein performing gradient calculation on the linearly transformed gray value of each pixel by using a horizontal Sobel operator and a vertical Sobel operator to obtain the gradient corresponding to each pixel comprises:
respectively substituting the gray value of each pixel after linear transformation into the horizontal Sobel operator and the vertical Sobel operator to carry out calculation so as to obtain a first calculation result and a second calculation result corresponding to the pixel;
and performing a square root operation on the sum of the squares of the first calculation result and the second calculation result corresponding to each pixel to obtain the gradient corresponding to each pixel.
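One way to realize claim 5, sketched with the standard 3x3 Sobel kernels and SciPy convolution; the patent fixes neither the kernel size nor the border handling, so both are assumptions here:

```python
import numpy as np
from scipy.ndimage import convolve

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float64)  # horizontal Sobel operator
SOBEL_Y = SOBEL_X.T                                 # vertical Sobel operator

def gradient_magnitude(enhanced):
    gx = convolve(enhanced.astype(np.float64), SOBEL_X)  # first calculation result
    gy = convolve(enhanced.astype(np.float64), SOBEL_Y)  # second calculation result
    # Square root of the sum of squares yields each pixel's gradient.
    return np.sqrt(gx ** 2 + gy ** 2)
```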
6. A lane line identification apparatus, comprising:
an acquisition unit configured to acquire a road image;
the mean value calculation unit is used for carrying out mean value calculation on the gray values of all pixels in the road image to obtain a gray mean value of the road image;
the classification unit is used for matching a target illumination intensity type corresponding to the gray average value of the road image from a plurality of preset illumination intensity types;
the interval determining unit is used for determining a plurality of gray value intervals corresponding to the target illumination intensity type and a linear transformation function corresponding to each gray value interval; wherein each gray value interval is obtained in advance, through repeated adjustment, as the interval at which the lane line and the road surface have maximum contrast after linear transformation;
the gray level transformation unit is used for performing, for each pixel, linear transformation on the gray value of the pixel by using the linear transformation function corresponding to the gray value interval in which the gray value of the pixel is located;
the gradient calculation unit is used for respectively carrying out gradient calculation on the gray value of each pixel after linear transformation by utilizing a horizontal Sobel operator and a vertical Sobel operator to obtain the gradient corresponding to each pixel;
and the edge determining unit is used for determining the pixels with the gradient larger than a preset threshold value as the edge points of the lane lines in the road image.
7. The apparatus of claim 6, wherein the mean calculating unit comprises:
the first calculation unit is used for calculating the ratio of the number of the pixels corresponding to each gray value to the total number of the pixels of the road image to obtain the probability value corresponding to each gray value;
the second calculation unit is used for calculating the product of each gray value and the corresponding probability value thereof to obtain a weighted average value corresponding to each gray value;
and the third calculating unit is used for calculating the sum of the weighted average values corresponding to all the gray values to obtain the gray average value of the road image.
8. The apparatus of claim 6, wherein the section determining unit comprises:
the control point determining unit is used for determining the coordinates of a first control point and the coordinates of a second control point corresponding to the target illumination intensity type; the coordinates of the first control point comprise a first original gray value and a first transformed gray value; the coordinates of the second control point comprise a second original gray value and a second transformed gray value; the first original gray value is smaller than the second original gray value, and the first transformed gray value is smaller than the second transformed gray value;
the interval dividing unit is used for determining three gray value intervals corresponding to the target illumination intensity type according to the first original gray value and the second original gray value; the gray value interval comprises a first gray value interval, a second gray value interval and a third gray value interval; the first gray value interval is greater than or equal to 0 and smaller than the first original gray value; the second gray value interval is greater than or equal to the first original gray value and smaller than the second original gray value; the third gray value interval is greater than or equal to the second original gray value and is less than or equal to 255;
a function determining unit, configured to determine, according to the coordinates of the first control point and the coordinates of the second control point, the linear transformation functions corresponding to the first gray value interval, the second gray value interval and the third gray value interval respectively; the linear transformation function corresponding to the first gray value interval is the linear equation of the segment connecting the origin and the first control point; the linear transformation function corresponding to the second gray value interval is the linear equation of the segment connecting the first control point and the second control point; the linear transformation function corresponding to the third gray value interval is the linear equation of the segment connecting the second control point and the maximum value point, both coordinates of the maximum value point being 255.
9. A storage medium storing a computer program which, when executed, implements the lane line identification method according to any one of claims 1 to 5.
10. An electronic device, comprising:
a memory and a processor;
wherein the memory is used for storing programs;
the processor is configured to execute the program, which, when executed, is specifically configured to implement the lane line identification method according to any one of claims 1 to 5.

Priority Applications (1)

CN202011478436.3A (priority date 2020-12-15, filing date 2020-12-15): Lane line recognition method and apparatus, storage medium, and electronic device

Publications (1)

Publication Number: CN112418187A; Publication Date: 2021-02-26

Family ID: 74775148

Family Applications (1)

CN202011478436.3A (priority date 2020-12-15, filing date 2020-12-15): Lane line recognition method and apparatus, storage medium, and electronic device

Country Status (1)

CN: CN112418187A



Patent Citations (8)

* Cited by examiner, † Cited by third party

US20140125801A1 * (priority 2012-03-16, published 2014-05-08), Tongji University: On-line tunnel deformation monitoring system based on image analysis and its application
CN104392212A * (priority 2014-11-14, published 2015-03-04), Beijing University of Technology: Method for detecting road information and identifying forward vehicles based on vision
CN106097351A * (priority 2016-06-13, published 2016-11-09), Xi'an University of Posts and Telecommunications: Adaptive threshold image segmentation method based on multiple objects
CN107392139A * (priority 2017-07-18, published 2017-11-24), Hisense Group Co., Ltd.: Lane line detection method and terminal device based on Hough transform
CN107463918A * (priority 2017-08-17, published 2017-12-12), Wuhan University: Lane line extraction method based on fusion of laser point cloud and image data
CN108052904A * (priority 2017-12-13, published 2018-05-18), Liaoning University of Technology: Method and device for acquiring lane lines
CN109961409A * (priority 2019-02-26, published 2019-07-02), Ping An Technology (Shenzhen) Co., Ltd.: Method and device for linearly enhancing image contrast
CN111046795A * (priority 2019-12-12, published 2020-04-21), Yangzhou Houchao Technology Co., Ltd.: Real-time vehicle line-pressing behavior detection method based on binocular vision

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party

NAN MA: "An All-Weather Lane Detection System Based on Simulation Interaction Platform", IEEE ACCESS, 27 December 2018, pages 46121-46130 *
He Bin et al.: "Visual C++ Digital Image Processing", pages 134-145 *
Wu Youfu, Minzu University of China Press *
Wang Xuchen: "Research on Lane Line Detection Algorithms", China Master's Theses Full-text Database, Engineering Science and Technology II, 15 July 2020, pages 035-322 *

Cited By (3)

* Cited by examiner, † Cited by third party

CN113657333A * (priority 2021-08-23, published 2021-11-16), Shenzhen Kewei Robot Technology Co., Ltd.: Alert line identification method and device, computer equipment and storage medium
CN113657333B * (priority 2021-08-23, published 2024-01-12), Shenzhen Kewei Robot Technology Co., Ltd.: Guard line identification method, guard line identification device, computer equipment and storage medium
CN114170209A * (priority 2021-12-14, published 2022-03-11), Beijing Baihui Weikang Technology Co., Ltd.: Method and device for determining gradient features in image and spine surgery robot

Similar Documents

Publication: Title

CN100371676C: Method and device for fast, high-precision positioning of the centroid of a light spot image
CN106546263B: Machine-vision-based method for detecting the laser line emitted by a laser level
CN107507226B: Image matching method and device
CN112418187A: Lane line recognition method and apparatus, storage medium, and electronic device
CN107292879B: Sheet metal surface abnormality detection method based on image analysis
CN107609510B: Positioning method and device for the lower set of a quayside container crane
CN102998664B: Method and device for identifying water bloom based on synthetic aperture radar
CN111626277A: Vehicle tracking method and device based on over-station inter-modulation index analysis
CN110515092B: Plane touch method based on laser radar
CN115063618B: Defect positioning method, system, equipment and medium based on template matching
CN111325717A: Method and equipment for identifying defect positions on mobile phones
CN114814758B: Joint calibration method and device for camera, millimeter-wave radar and laser radar
CN111553914A: Vision-based goods detection method and device, terminal and readable storage medium
CN113409271B: Method, device and equipment for detecting oil stains on a lens
CN109472772B: Image stain detection method, device and equipment
CN105405120A: Method for extracting cloud images from sky images
CN115205165B: Spraying method of anticorrosive material for industrial machine housings
CN115760653A: Image correction method, device, equipment and readable storage medium
CN115690431A: Bar code image binarization method and device, storage medium and computer equipment
CN113408538A: SVM-based radar RD image weak target detection method and system, storage medium and electronic terminal
Yin et al.: Learning based visibility measuring with images
CN113218361B: Camera ranging method and device
CN117197783B: Intelligent-perception-based data analysis system for automobile data recorders
CN116500042B: Defect detection method, device, system and storage medium
CN116416251B: Method and related device for detecting the quality of a whole-core flame-retardant conveyor belt based on image processing

Legal Events

Code: Title

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 2021-02-26)