CN108090401B - Line detection method and line detection apparatus - Google Patents

Line detection method and line detection apparatus

Info

Publication number
CN108090401B
Authority
CN
China
Prior art keywords
line
model
line model
determining
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201611037142.0A
Other languages
Chinese (zh)
Other versions
CN108090401A (en)
Inventor
贺娜 (He Na)
刘殿超 (Liu Dianchao)
师忠超 (Shi Zhongchao)
王刚 (Wang Gang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ricoh Co Ltd
Original Assignee
Ricoh Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ricoh Co Ltd filed Critical Ricoh Co Ltd
Priority to CN201611037142.0A priority Critical patent/CN108090401B/en
Publication of CN108090401A publication Critical patent/CN108090401A/en
Application granted granted Critical
Publication of CN108090401B publication Critical patent/CN108090401B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present disclosure relates to a model-based line detection method and line detection apparatus. The line detection method comprises the following steps: extracting line features from an input current frame image; performing initialization of a line model based on the extracted line features; updating the line model based on the extracted line features and the initialized line model; and determining the detected line according to the updated line model. With the line detection method and line detection apparatus of the present disclosure, the influence of noise on lane line detection can be overcome, and time and processing overhead are significantly reduced compared with conventional methods that detect lane lines and road markings separately.

Description

Line detection method and line detection apparatus
Technical Field
The present disclosure relates to the field of image processing, and more particularly, to a line detection method and a line detection apparatus based on a model.
Background
Line detection technology has wide application in the field of image processing, for example, lane line detection is an important application of line detection technology.
When lane line detection is performed, road markings on the road surface (such as arrows and zebra crossings) affect detection accuracy. Current solutions to overcome this influence fall into a pre-processing mode and a post-processing mode. In the pre-processing mode, road marking detection and recognition run as a process completely independent of lane line detection, and the two independent image processing passes increase time and processing overhead. In the post-processing mode, road markings are removed from the image to be detected based on differences between road markings and lane lines; this processing is difficult and susceptible to noise in the image.
Accordingly, it is desirable to provide a more robust and efficient line detection method and line detection apparatus that overcome the influence of noise on lane line detection and significantly reduce time and processing overhead relative to conventional methods that detect lane lines and road markings separately.
Disclosure of Invention
In view of the above, the present disclosure provides a line detection method and a line detection apparatus that perform detection directly on the basis of a lane line model from which road markings are excluded.
According to an embodiment of the present disclosure, there is provided a line detection method including: extracting line features from an input current frame image; performing initialization of a line model based on the extracted line features; updating the line model based on the extracted line features and the initialized line model; and determining the detected line according to the updated line model.
Further, in the line detection method according to an embodiment of the present disclosure, the input current frame image includes a parallax image and a grayscale image, the line features include feature points and feature line segments, and the feature line segments are obtained by fitting the feature points.
Further, in the line detection method according to an embodiment of the present disclosure, the performing initialization of a line model based on the extracted line features includes: acquiring the extracted line features; obtaining a predetermined line model; randomly selecting the line features, and calculating model parameters, the number of support points, and a cost function of the predetermined line model; and determining a line model satisfying a predetermined condition and having model parameters that maximize the number of support points and minimize the cost as the initialized line model.
Further, in the line detection method according to an embodiment of the present disclosure, the determining that the predetermined condition is satisfied includes: determining that each line based on the line model satisfies a mutual matching condition of different regions; and determining that each line based on the line model satisfies a width limitation condition.
Further, in the line detection method according to an embodiment of the present disclosure, the determining that each line based on the line model satisfies a mutual matching condition of different regions includes: dividing each line based on the line model into a plurality of regions; calculating the number of feature points that each line has in each of the plurality of regions; determining, for each line, the number of regions in which its number of feature points is greater than a predetermined feature point number threshold; and determining that lines whose number of such regions is greater than a predetermined region number threshold satisfy the mutual matching condition of different regions.
Further, in the line detection method according to an embodiment of the present disclosure, the determining that each line based on the line model satisfies the width limitation condition includes: performing inverse perspective transformation on a feature image composed of the extracted line features to obtain an inverse perspective transformed image; calculating, for each line in the inverse perspective transformed image, the width to a predetermined feature point to obtain a width histogram; and determining that a line having a peak value greater than a predetermined peak value threshold satisfies the width limitation condition.
Further, in the line detection method according to an embodiment of the present disclosure, the updating the line model based on the extracted line features and the initialized line model includes: performing gradient descent or the Gauss-Newton method, based on the extracted line features, to update the model parameters of the line model.
According to another embodiment of the present disclosure, there is provided a line detecting apparatus including: a feature extraction unit configured to extract line features from an input current frame image; an initialization unit configured to perform initialization of a line model based on the extracted line features; an updating unit configured to update the line model based on the extracted line features and the initialized line model; and a detection unit configured to determine a detected line according to the updated line model.
Further, in the line detection apparatus according to another embodiment of the present disclosure, the initialization unit is further configured to acquire the extracted line features; obtain a predetermined line model; randomly select the line features and calculate model parameters, the number of support points, and a cost function of the predetermined line model; and determine a line model satisfying a predetermined condition and having model parameters that maximize the number of support points and minimize the cost as the initialized line model.
Further, in the line detection apparatus according to another embodiment of the present disclosure, the initialization unit is further configured to determine that each line based on the line model satisfies a mutual matching condition of different regions, and to determine that each line based on the line model satisfies a width limitation condition.
According to the line detection method and the line detection apparatus of the embodiments of the present disclosure, direct detection is performed based on a lane line model from which road markings are removed. More specifically, during initialization of the lane line model used for detection, the lane line model parameters are computed under specific conditions that a real lane line must satisfy, and detection is then performed directly with the resulting model. This removes the influence of noise such as road markings on lane line detection, and significantly reduces time and processing overhead relative to conventional methods that detect lane lines and road markings separately.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and are intended to provide further explanation of the claimed technology.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent by describing in more detail embodiments of the present invention with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 is a flow chart illustrating a line detection method according to an embodiment of the present disclosure;
fig. 2 is a flowchart further illustrating a line feature extraction process in a line detection method according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating extracted line features;
FIG. 4 is a flow diagram further illustrating a line model initialization process in a line detection method according to an embodiment of the present disclosure;
FIG. 5 is a diagram illustrating one example of a predetermined line model;
FIG. 6 is a flow diagram further illustrating a near-far region matching sub-process in a line model initialization process, according to an embodiment of the present disclosure;
FIG. 7 is a diagram illustrating a near-far region matching sub-process;
FIG. 8 is a flow diagram further illustrating a line width limiting sub-process in the line model initialization process, according to an embodiment of the present disclosure;
FIGS. 9A and 9B are schematic diagrams illustrating an inverse perspective transformation in the line width limiting sub-process;
FIG. 10 is a diagram illustrating width histogram filtering in the line width limiting sub-process;
fig. 11 is a functional configuration block diagram illustrating a line detection apparatus according to an embodiment of the present disclosure;
FIG. 12 is an overall hardware block diagram illustrating a line detection system according to an embodiment of the present disclosure; and
fig. 13 is a block diagram illustrating a configuration of a line detection apparatus according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, exemplary embodiments according to the present invention will be described in detail below with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of embodiments of the invention and not all embodiments of the invention, with the understanding that the invention is not limited to the example embodiments described herein. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the invention described in the present disclosure without inventive step, shall fall within the scope of protection of the invention.
Hereinafter, preferred embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.
Fig. 1 is a flow chart illustrating a line detection method according to an embodiment of the present disclosure. A line detection method according to an embodiment of the present disclosure includes the following steps.
In step S101, line features are extracted from the input current frame image.
Specifically, in one embodiment of the present disclosure, the line feature of the line to be detected may be any feature capable of characterizing the line, including, but not limited to, the line's color, grayscale, shape, edge, or parallax features, or any combination of these features. Any suitable way of extracting line features corresponding to the line to be detected may be used. Hereinafter, the line feature extraction process will be described further with reference to figs. 2 and 3. Thereafter, the process proceeds to step S102.
In step S102, based on the extracted line features, initialization of a line model is performed.
Specifically, in one embodiment of the present disclosure, take the example that the line to be detected is a lane line; accordingly, the line model is a lane line model. A variety of lane line models exist, including linear models, isolated point models, parabolic models and their extensions, hyperbolic models, clothoid models, spline models, Snake models, 3D models, and so forth. In one embodiment of the present disclosure, one of the existing lane line models is selected, and model initialization based on the random sample consensus (RANSAC) algorithm is performed. Specifically, for the selected predetermined lane line model, line features are randomly sampled, and model parameters, the number of support points, and a cost function of the predetermined line model are calculated; finally, a line model satisfying a predetermined condition and having model parameters that maximize the number of support points and minimize the cost is determined as the initialized line model. In a specific embodiment of the present disclosure, the predetermined conditions include, but are not limited to, a near-far region matching condition and a line width limitation condition, which exclude the influence of noise such as road markings on the line model. Hereinafter, the line model initialization process, including its near-far region matching and line width limiting sub-processes, will be described further with reference to figs. 4 to 10. Thereafter, the process proceeds to step S103.
In step S103, the line model is updated based on the extracted line features and the initialized line model.
Specifically, in one embodiment of the present disclosure, gradient descent or the Gauss-Newton method is performed to update the model parameters of the line model based on the extracted line features. Thereafter, the process proceeds to step S104.
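As a minimal sketch of this update step — assuming, purely for illustration, a quadratic center-line polynomial u(v) = a0 + a1·v + a2·v² fitted to the extracted feature points by least squares; the disclosure fixes neither the model form nor the loss — a Gauss-Newton refinement could look as follows. For a model that is linear in its parameters, one Gauss-Newton step coincides with the linear least-squares solution; the iteration matters when the chosen model (e.g., a clothoid) is nonlinear in its parameters.

```python
import numpy as np

def gauss_newton_update(params, feat_v, feat_u, iters=10, tol=1e-8):
    """Refine polynomial lane-model parameters [a0, a1, a2] so that
    u(v) = a0 + a1*v + a2*v**2 fits the feature points (feat_v, feat_u).
    Hypothetical model and names; a sketch, not the patented procedure."""
    a = np.asarray(params, dtype=float)
    v = np.asarray(feat_v, dtype=float)
    u = np.asarray(feat_u, dtype=float)
    for _ in range(iters):
        J = np.stack([np.ones_like(v), v, v**2], axis=1)  # Jacobian dr/da
        r = J @ a - u                                     # residuals
        delta, *_ = np.linalg.lstsq(J, r, rcond=None)     # solve J^T J d = J^T r
        a -= delta                                        # Gauss-Newton step
        if np.linalg.norm(delta) < tol:
            break
    return a
```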
In step S104, the detected line is determined according to the updated line model.
Specifically, the updated line model obtained by the above-described processing describes each line in the current frame image, and therefore the line to be detected in the current frame image can be directly obtained from the updated line model.
The line detection method according to an embodiment of the present disclosure is outlined above with reference to fig. 1. The line detection method according to the embodiment of the present disclosure described with reference to fig. 1 enables direct detection based on a line model excluding noise such as road markings. Hereinafter, each processing step in the line detection method according to the embodiment of the present disclosure will be described in detail further with reference to the drawings.
Fig. 2 is a flowchart further illustrating a line feature extraction process in a line detection method according to an embodiment of the present disclosure; fig. 3 is a schematic diagram illustrating line features extracted by the line feature extraction process of fig. 2.
As shown in fig. 2, the line feature extraction process in the line detection method according to the embodiment of the present disclosure includes the following steps.
In step S201, an input current frame image is acquired. In one embodiment of the present disclosure, the input current frame image includes a parallax image and a grayscale image. Thereafter, the process proceeds to step S202.
In step S202, feature points and feature line segments are extracted. In one embodiment of the present disclosure, feature points or feature line segments may be output as the line features. As one example, line segments conforming to the line features of the line to be detected may be detected directly in the captured current frame image by a straight-line detection method such as the Hough transform. As another example, feature points may first be detected in the current frame image by a detection method matched to the feature of the line to be detected (for example, if the feature of the line to be detected is an edge feature, the feature points may be detected by an edge detection method), and feature line segments may then be fitted from the detected feature points. Thereafter, the process proceeds to step S203.
In step S203, the extracted line features are output. In one embodiment of the present disclosure, the extracted line features are output for the initialization and updating of the line model, which will be described in detail later.
As shown in fig. 3, the line features 301 extracted by the line feature extraction process of fig. 2 include both features representing lane lines and features representing noise such as road markings.
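As a hedged illustration of the extraction step S202 above, the sketch below pairs Canny edge detection (for feature points) with the probabilistic Hough transform (for feature line segments), since the text names edge detection and the Hough transform as examples; the concrete thresholds are placeholders, not values from the disclosure:

```python
import cv2
import numpy as np

def extract_line_features(gray):
    """Sketch of steps S201-S203 for an 8-bit grayscale input: edge feature
    points via Canny, candidate feature line segments via the probabilistic
    Hough transform. All thresholds are illustrative placeholders."""
    edges = cv2.Canny(gray, 50, 150)                      # feature-point map
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=40, minLineLength=20, maxLineGap=5)
    points = np.column_stack(np.nonzero(edges))           # (v, u) feature points
    if segments is None:                                  # no segments found
        segments = np.empty((0, 1, 4), dtype=np.int32)
    return points, segments
```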
Hereinafter, the initialization process of the line model using the line features 301 extracted by the line feature extraction process shown in fig. 2 is described with reference to fig. 4 to 10.
Fig. 4 is a flow chart further illustrating a line model initialization process in a line detection method according to an embodiment of the present disclosure. As shown in fig. 4, the line model initialization process in the line detection method according to the embodiment of the present disclosure includes the following steps.
In step S121, the extracted line feature is acquired. That is, the line feature 301 extracted by the line feature extraction process shown in fig. 2 is acquired. Thereafter, the process proceeds to step S122.
In step S122, a predetermined line model is acquired. As described above, the predetermined line model may be an existing lane line model including a linear model, an isolated point model, a parabolic model and its extensions, a hyperbolic model, a clothoid model, a spline model, a Snake model, a 3D model, and the like.
Fig. 5 is a diagram illustrating one example of a predetermined line model. The predetermined line model shown in fig. 5 is a polynomial model.
Specifically, as shown in fig. 5, it is assumed that the center line of the current lane is L_mid, the left lane line of the current lane is L_-1, and the right lane line is L_1; the left lane line of the nth lane on the left side is L_-n, and the left lane line of the nth lane on the right side is L_n. The lane width of the current lane is w_0, the lane width of the nth lane on the left side is w_-n, and the lane width of the nth lane on the right side is w_n. vp denotes the ordinate of the position where the lane lines vanish; f is the focal length of the lens, and H is the mounting height of the camera.
L_mid is expressed by a polynomial, as shown in equation (1); the lane line model is then expressed in terms of L_mid, as shown in equation (2), with auxiliary definitions given in equation (3).
[Equations (1)–(3) are embedded as images in the original publication and are not recoverable verbatim: equation (1) gives the polynomial expression of L_mid; equation (2) expresses the lane line model in terms of L_mid, the lane widths w_n, vp, f, and H; and equation (3) defines the auxiliary quantities used in equation (2).]
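Since the original equations cannot be recovered, the following is offered only as a plausible reconstruction under stated assumptions — a flat road, a pinhole camera, and a quadratic polynomial for the center line — none of which are confirmed by the embedded images; the primed numbering marks them as reconstructions:

```latex
% Illustrative reconstruction only -- NOT the original equations (1)-(3),
% which are embedded as images in the published patent.
% Assumptions: flat road, pinhole camera, quadratic center-line polynomial.
\begin{aligned}
L_{\mathrm{mid}}&:\quad u(v) = a_0 + a_1 v + a_2 v^{2}, \qquad v > vp,
&& (1')\\
L_{n}&:\quad u_{n}(v) = u(v) + \frac{f\,d_{n}}{Z(v)} = u(v) + \frac{d_{n}}{H}\,(v - vp),
&& (2')\\
&\phantom{:}\quad d_{n} = \operatorname{sgn}(n)\left(\frac{w_{0}}{2} + \sum_{i=1}^{\lvert n\rvert - 1} w_{\operatorname{sgn}(n)\,i}\right),
\qquad Z(v) = \frac{f\,H}{v - vp}.
&& (3')
\end{aligned}
```

Here u and v are the image column and row, d_n is the signed lateral offset of L_n from the center line on the road plane, and Z(v) is the scene depth at row v — which is how the focal length f and camera height H enter the model.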
referring back to fig. 4, after a predetermined line model is acquired, the process proceeds to step S123.
In step S123, the line features are randomly selected, and model parameters, the number of support points, and a cost function of the predetermined line model are calculated. That is, in one embodiment of the present disclosure, model initialization based on a random sample consensus (RANSAC) algorithm is performed. Thereafter, the process proceeds to step S124.
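A minimal sketch of this RANSAC-style initialization follows; fit_model, count_support, cost, and passes_conditions are placeholders for the chosen predetermined line model and for the predetermined-condition checks of step S124 (described next), and none of these names come from the disclosure:

```python
import numpy as np

def ransac_initialize(features, fit_model, count_support, cost, passes_conditions,
                      sample_size=3, n_iter=200, seed=0):
    """Sketch of steps S123-S124. features: ndarray of extracted line features.
    Keeps the candidate model that passes the predetermined conditions while
    maximizing the number of support points and minimizing the cost."""
    rng = np.random.default_rng(seed)
    best, best_support, best_cost = None, -1, np.inf
    for _ in range(n_iter):
        idx = rng.choice(len(features), size=sample_size, replace=False)
        params = fit_model(features[idx])         # model parameters from the sample
        if not passes_conditions(params, features):
            continue                              # predetermined conditions (S124)
        s = count_support(params, features)       # support points within tolerance
        c = cost(params, features)                # cost function of the model
        if s > best_support or (s == best_support and c < best_cost):
            best, best_support, best_cost = params, s, c
    return best                                   # initialized line model
```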
In step S124, a line model that satisfies a predetermined condition and has model parameters that maximize the number of support points and minimize the cost is determined as the initialized line model. In one embodiment of the present disclosure, the predetermined conditions include, but are not limited to, a near-far region matching condition and a line width limitation condition, by which the influence of noise such as road markings on the line model is excluded. Hereinafter, the near-far region matching sub-process in the line model initialization process will be described with reference to figs. 6 and 7, and the line width limiting sub-process will be described with reference to figs. 8 to 10.
FIG. 6 is a flow diagram further illustrating a near-far region matching sub-process in a line model initialization process, according to an embodiment of the present disclosure; fig. 7 is a diagram illustrating the near-far region matching sub-process.
As shown in fig. 6, the near-far region matching sub-process in the line model initialization process according to the embodiment of the present disclosure includes the following steps.
In step S1241, each line based on the line model is divided into a plurality of regions.
Specifically, as shown in fig. 7, the vertical extent of the image is divided into 3 regions, ordered from far to near with respect to the image capturing apparatus: R1, R2, and R3. It will be readily appreciated that the vertical extent may be divided into a different number of regions. In one embodiment of the present disclosure, the principle of region division is that regions far from the camera span a small range while regions near the camera span a relatively large range.
Referring back to fig. 6, after a plurality of areas are obtained, the process proceeds to step S1242.
In step S1242, the number of feature points that each line has in each of the plurality of regions (e.g., n1, n2, …) is calculated. In one embodiment of the present disclosure, the number of feature points may be the number of actual feature points or the length of a feature line segment, depending on the type of the extracted line features. Thereafter, the process proceeds to step S1243.
In step S1243, for each line, the number of regions in which its feature point count exceeds the predetermined feature point number threshold is determined. In one embodiment of the present disclosure, a predetermined feature point number threshold (e.g., t1, t2, …) is set for each region; if the number of feature points of a line in a region (e.g., n1) is greater than that region's threshold (e.g., t1), i.e., n1 > t1, the region is a passing region of the line. The number s of passing regions among all regions of each line is then counted. Thereafter, the process proceeds to step S1244.
In step S1244, lines whose number of passing regions is greater than the predetermined region number threshold are determined to satisfy the mutual matching condition of different regions. In one embodiment of the present disclosure, a predetermined region number threshold f(n) is preset; for example, f(n) may be set to 1/2 of the total number of regions. For each line, if the number s of passing regions among all regions is greater than the predetermined region number threshold f(n), i.e., s > f(n), the line is determined to satisfy the mutual matching condition of different regions.
As shown in fig. 7, with a total of 3 regions, the predetermined region number threshold is f(n) = 3/2. The leftmost and rightmost lines (real solid lane lines) exist continuously in all three regions, so all three regions are passing regions; their number of passing regions is s = 3, which satisfies s > f(n). That is, the leftmost and rightmost lines are determined to satisfy the mutual matching condition of different regions.
Similarly, the middle line (a real dashed lane line) has two passing regions (R2 and R3) among the three regions, so s = 2, which also satisfies s > f(n). That is, the middle line is also determined to satisfy the mutual matching condition of different regions.
By contrast, the lines (road markings) on either side of the middle line have only one passing region (R3) among the three regions, so s = 1, and s > f(n) is not satisfied. That is, the lines (road markings) on either side of the middle line are determined not to satisfy the mutual matching condition of different regions.
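The per-line check of steps S1241 to S1244 can be sketched as follows — a hedged illustration in which region_bounds are image-row intervals ordered from far to near, point_thresholds are the per-region thresholds t1, t2, …, and f_n is the region number threshold f(n); all names are placeholders:

```python
import numpy as np

def satisfies_region_matching(feat_rows, region_bounds, point_thresholds, f_n):
    """feat_rows: ndarray of image rows (v) of the feature points lying on one
    candidate line. Counts the 'passing regions' where the line has more
    feature points than that region's threshold, then applies s > f(n)."""
    s = 0
    for (v_top, v_bottom), t in zip(region_bounds, point_thresholds):
        n = np.count_nonzero((feat_rows >= v_top) & (feat_rows < v_bottom))
        if n > t:              # region is a passing region for this line
            s += 1
    return s > f_n             # mutual matching condition of different regions
```

With three regions and f_n = 3/2 this reproduces the fig. 7 outcome: the solid lane lines (s = 3) and the dashed lane line (s = 2) pass, while the road markings (s = 1) are rejected.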
As described above, through the near-far region matching sub-process in the line model initialization process described with reference to figs. 6 and 7, the road markings, which constitute noise, are successfully determined not to satisfy the mutual matching condition of different regions.
FIG. 8 is a flow diagram further illustrating a line width limiting sub-process in the line model initialization process, according to an embodiment of the present disclosure; FIGS. 9A and 9B are schematic diagrams illustrating an inverse perspective transformation in the line width limiting sub-process; fig. 10 is a diagram illustrating width histogram filtering in the line width limiting sub-process.
As shown in fig. 8, the line width limiting sub-process in the line model initialization process according to an embodiment of the present disclosure includes the following steps.
In step S1245, inverse perspective transformation is performed on the feature image composed of the extracted line features to obtain an inverse perspective transformed image.
Specifically, fig. 9A shows an original image in which line features are extracted by the line feature extraction process shown in fig. 2. Fig. 9B shows an inverse perspective transformed image obtained by performing inverse perspective transformation on the original image.
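Step S1245 is, in essence, a road-plane homography. A hedged OpenCV sketch follows; src_quad (the four (u, v) corners of a road-plane trapezoid, ordered top-left, top-right, bottom-right, bottom-left) must come from the actual camera calibration and is a placeholder here, not a value from the disclosure:

```python
import cv2
import numpy as np

def inverse_perspective(feature_img, src_quad, dst_size=(200, 400)):
    """Warp a road-plane trapezoid in the feature image to a rectangular
    bird's-eye view in which lane lines become parallel and their widths
    directly comparable (fig. 9A -> fig. 9B)."""
    w, h = dst_size
    dst_quad = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
    M = cv2.getPerspectiveTransform(np.float32(src_quad), dst_quad)
    return cv2.warpPerspective(feature_img, M, (w, h))
```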
Referring back to fig. 8, after the inverse perspective transformed image is obtained, the process proceeds to step S1246.
In step S1246, for each line in the inverse perspective transformed image, the width to a predetermined feature point is calculated to obtain a width histogram.
Specifically, fig. 10 shows the width histogram. In the histogram shown in fig. 10, the abscissa represents the width, and the ordinate represents the number of feature points existing at a specific width.
Referring back to fig. 8, after the width histogram is obtained, the process proceeds to step S1247.
In step S1247, it is determined that the lines having a peak greater than the predetermined peak threshold satisfy the width limitation condition.
Specifically, as shown in fig. 10, the maximum peak in the width histogram is MaxNum, and the other peaks are Num1, Num2, and Num3. For example, with the predetermined peak threshold f(MaxNum) set to 1/3 of the maximum peak value, i.e., f(MaxNum) = MaxNum/3, Num1 and Num2 satisfy Num1 > f(MaxNum) and Num2 > f(MaxNum), so the corresponding lines satisfy the width limitation condition, whereas Num3 < f(MaxNum), so the corresponding line does not.
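The histogram filtering of steps S1246 and S1247 can be sketched as below. The dict layout, bin count, and shared binning are illustrative assumptions; only the f(MaxNum) = MaxNum/3 threshold follows the example above:

```python
import numpy as np

def width_limit_filter(widths_per_line, bins=64, ratio=1.0 / 3.0):
    """widths_per_line: dict mapping a line id to the widths measured from
    that line to the predetermined feature points in the bird's-eye view.
    Keeps the lines whose histogram peak exceeds f(MaxNum) = ratio * MaxNum."""
    all_widths = np.concatenate(list(widths_per_line.values()))
    edges = np.histogram_bin_edges(all_widths, bins=bins)   # shared binning
    peaks = {line: np.histogram(w, bins=edges)[0].max()
             for line, w in widths_per_line.items()}
    max_num = max(peaks.values())                           # MaxNum
    return [line for line, peak in peaks.items() if peak > ratio * max_num]
```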
As described above, through the line width limiting sub-process in the line model initialization process described with reference to figs. 8 to 10, the road markings, which constitute noise, are also successfully determined not to satisfy the width limitation condition.
In the above, the line detection method according to the embodiments of the present disclosure has been described. Hereinafter, a line detection apparatus and a line detection system using the line detection method will be further described with reference to the drawings.
Fig. 11 is a functional configuration block diagram illustrating a line detection apparatus according to an embodiment of the present disclosure. As shown in fig. 11, the line detection apparatus 10 may include a feature extraction unit 101, an initialization unit 102, an update unit 103, and a detection unit 104, which may respectively perform the respective steps/functions of the line detection method described above in connection with fig. 1. Therefore, only the main functions of the units of the line detecting apparatus 10 will be described below, and details that have been described above will be omitted.
The feature extraction unit 101 is configured to extract line features from an input current frame image. Specifically, the line feature of the line to be detected may be any feature capable of characterizing the line, including, but not limited to, the line's color, grayscale, shape, edge, or parallax features, or any combination of these features. Any suitable way of extracting line features corresponding to the line to be detected may be used. As one example, the feature extraction unit 101 may detect, directly in the captured current frame image, line segments conforming to the line features of the line to be detected by a straight-line detection method such as the Hough transform. As another example, the feature extraction unit 101 may first detect feature points in the current frame image by a detection method matched to the feature of the line to be detected (for example, if the feature is an edge feature, the feature points may be detected by an edge detection method), and then fit feature line segments from the detected feature points.
The initialization unit 102 is configured to perform initialization of a line model based on the extracted line features. Specifically, for a selected predetermined lane line model (including, but not limited to, a linear model, an isolated point model, a parabolic model and its extensions, a hyperbolic model, a clothoid model, a spline model, a Snake model, a 3D model, etc.), the initialization unit 102 randomly selects line features and calculates model parameters, the number of support points, and a cost function of the predetermined line model, and finally determines a line model satisfying a predetermined condition and having model parameters that maximize the number of support points and minimize the cost as the initialized line model. In a specific embodiment of the present disclosure, the predetermined conditions utilized by the initialization unit 102 include, but are not limited to, a near-far region matching condition and a line width limitation condition, which exclude the influence of noise such as road markings on the line model.
The updating unit 103 is configured to update the line model based on the extracted line features and the initialized line model. Specifically, the updating unit 103 performs gradient descent or the Gauss-Newton method to update the model parameters of the line model based on the extracted line features.
The detection unit 104 is configured to determine the detected line based on the updated line model.
Fig. 12 is an overall hardware block diagram illustrating a line detection system according to an embodiment of the present disclosure. As shown in fig. 12, the line detection system 20 may include: an input device 201 for inputting relevant images or information from the outside, such as a depth map and a grayscale map (color map) captured by a camera, which may be, for example, a keyboard, a mouse, or a camera; a processing device 202 for implementing the line detection method according to the embodiments of the present disclosure, or serving as the line detection apparatus described above, which may be any device with processing capability able to implement the functions described above, for example a general-purpose processor, a digital signal processor (DSP), an ASIC, a field programmable gate array (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein; a storage device 203 for storing, in a volatile or nonvolatile manner, the data involved in the above-described line detection process, such as a depth map, a grayscale map (color map), various thresholds, a pre-established line model, extracted line segments, and an updated line model, which may be implemented with various volatile or nonvolatile memories such as a random access memory (RAM), a read-only memory (ROM), a hard disk, or a semiconductor memory; and an output device 204 for outputting the result of the above-described line detection process, such as a detected line, to the outside, which may be, for example, a display or a printer.
Fig. 13 is a block diagram illustrating a configuration of a line detection apparatus according to an embodiment of the present disclosure. As shown in fig. 13, the line detection apparatus 130 according to an embodiment of the present disclosure includes a memory 1301 and a processor 1302. Stored on the memory 1301 are computer program instructions which, when executed by the processor 1302, perform the line detection method as described above with reference to fig. 1 to 10.
In the above, the line detection method and the line detection apparatus according to the embodiments of the present disclosure have been described with reference to the drawings. The line detection method and the line detection apparatus perform direct detection based on a lane line model from which road markings are removed. More specifically, during initialization of the lane line model used for detection, the lane line model parameters are computed under specific conditions that a real lane line must satisfy, and detection is then performed directly with the resulting model; this removes the influence of noise such as road markings on lane line detection and significantly reduces time and processing overhead relative to conventional methods that detect lane lines and road markings separately.
The foregoing describes the general principles of the present invention in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in this disclosure are only examples and not limitations, and should not be considered essential to every embodiment of the present invention. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the invention is not limited to the specific details described above.
The block diagrams of devices, apparatuses, and systems referred to in this disclosure are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, or configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," "having," and the like are open-ended words meaning "including, but not limited to," and are used interchangeably therewith. The word "or," as used herein, means, and is used interchangeably with, the term "and/or," unless the context clearly dictates otherwise. The phrase "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to."
The flowchart of steps in the present disclosure and the above description of the methods are only given as illustrative examples and are not intended to require or imply that the steps of the various embodiments must be performed in the order given, some steps may be performed in parallel, independently of each other or in other suitable orders. Additionally, words such as "thereafter," "then," "next," etc. are not intended to limit the order of the steps; these words are only used to guide the reader through the description of these methods.
Also, as used herein, "or" in a list of items beginning with "at least one of" indicates a disjunctive list, such that, for example, a list of "at least one of A, B, or C" means A or B or C, or AB or AC or BC, or ABC (i.e., A and B and C). Furthermore, the word "exemplary" does not mean that the described example is preferred or better than other examples.
It should also be noted that the components or steps may be broken down and/or re-combined in the apparatus and method of the present invention. These decompositions and/or recombinations are to be regarded as equivalents of the present invention.
It will be understood by those of ordinary skill in the art that all or any portion of the methods and apparatus of the present disclosure may be implemented in any computing device (including processors, storage media, etc.) or network of computing devices, in hardware, firmware, software, or any combination thereof. The hardware may be implemented with a general-purpose processor, a digital signal processor (DSP), an ASIC, a field programmable gate array (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. The software may reside in any form of computer-readable tangible storage medium. By way of example, and not limitation, such computer-readable tangible storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disc storage, magnetic disk storage or other magnetic storage devices, or any other tangible medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc.
The techniques disclosed herein may also be implemented by running a program or a set of programs on any computing device. The computing device may be a well-known general-purpose device. The disclosed techniques may also be implemented simply by providing a program product containing program code for implementing the methods or apparatus, or by any storage medium having such a program product stored thereon.
Various changes, substitutions and alterations to the techniques described herein may be made without departing from the techniques of the teachings as defined by the appended claims. Moreover, the scope of the claims of the present disclosure is not limited to the particular aspects of the process, machine, manufacture, composition of matter, means, methods and acts described above. Processes, machines, manufacture, compositions of matter, means, methods, or acts, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding aspects described herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or acts.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the invention. Thus, the present invention is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the invention to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (6)

1. A line detection method, comprising:
extracting line features from an input current frame image;
performing initialization of a line model based on the extracted line features;
updating the line model based on the extracted line features and the initialized line model; and
determining a detected line according to the updated line model;
wherein said performing initialization of a line model based on said extracted line features comprises:
acquiring the extracted line features;
obtaining a predetermined line model;
randomly selecting the line features, and calculating model parameters, the number of support points, and a cost function of the predetermined line model; and
determining a line model satisfying a predetermined condition and having model parameters that maximize the number of support points and minimize the cost as the initialized line model;
wherein the determining that the predetermined condition is satisfied comprises:
determining that each line based on the line model satisfies a mutual matching condition of different regions; and
determining that each line based on the line model satisfies a width limitation condition;
wherein the determining that each line based on the line model satisfies a mutual matching condition of different regions comprises:
dividing each line based on the line model into a plurality of regions;
calculating the number of feature points that said each line has in each of said plurality of regions;
for each line, determining the number of regions of the region in which the number of feature points is greater than a predetermined feature point number threshold; and
determining that lines whose number of regions is greater than a predetermined region number threshold satisfy the mutual matching condition of different regions.
2. The line detection method according to claim 1, wherein the input current frame image includes a parallax image and a grayscale image, the line feature includes a feature point and a feature line segment, and the feature line segment is obtained by fitting the feature point.
3. The line detection method of claim 1, wherein the determining that each line based on the line model satisfies the width limitation condition comprises:
performing inverse perspective transformation on a feature image composed of the extracted line features to obtain an inverse perspective transformed image;
calculating a width to a predetermined feature point for each line in the inverse perspective transformed image to obtain a width histogram;
and determining that a line having a peak value greater than a predetermined peak value threshold satisfies the width limitation condition.
4. The line detection method of claim 1, wherein the updating the line model based on the extracted line features and the initialized line model comprises:
based on the extracted line features, gradient descent or the Gauss-Newton method is performed to update the model parameters of the line model.
5. A line detection apparatus comprising:
a feature extraction unit configured to extract line features from an input current frame image;
an initialization unit configured to perform initialization of a line model based on the extracted line features;
an updating unit configured to update the line model based on the extracted line features and the initialized line model; and
a detection unit configured to determine a detected line according to the updated line model;
wherein the initialization unit is further configured to acquire the extracted line features; obtain a predetermined line model; randomly select the line features and calculate model parameters, the number of support points, and a cost function of the predetermined line model; and determine a line model satisfying a predetermined condition and having model parameters that maximize the number of support points and minimize the cost as the initialized line model;
wherein the initialization unit is further configured to determine that each line based on the line model satisfies a mutual matching condition of different regions, and to determine that each line based on the line model satisfies a width limitation condition;
wherein the initialization unit is further configured to:
dividing each line based on the line model into a plurality of regions;
calculating the number of feature points that said each line has in each of said plurality of regions;
for each line, determining the number of regions of the region in which the number of feature points is greater than a predetermined feature point number threshold; and
determining that lines whose number of regions is greater than a predetermined region number threshold satisfy the mutual matching condition of different regions.
6. A line detection apparatus comprising:
a processor; and
a memory configured to store computer program instructions;
wherein the line detection method of any one of claims 1 to 4 is performed when the computer program instructions are executed by the processor.
CN201611037142.0A 2016-11-23 2016-11-23 Line detection method and line detection apparatus Expired - Fee Related CN108090401B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611037142.0A CN108090401B (en) 2016-11-23 2016-11-23 Line detection method and line detection apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611037142.0A CN108090401B (en) 2016-11-23 2016-11-23 Line detection method and line detection apparatus

Publications (2)

Publication Number Publication Date
CN108090401A CN108090401A (en) 2018-05-29
CN108090401B (en) 2021-12-14

Family

ID=62168609

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611037142.0A Expired - Fee Related CN108090401B (en) 2016-11-23 2016-11-23 Line detection method and line detection apparatus

Country Status (1)

Country Link
CN (1) CN108090401B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110210451B (en) * 2019-06-13 2022-07-08 重庆邮电大学 Zebra crossing detection method
CN110688971B (en) * 2019-09-30 2022-06-24 上海商汤临港智能科技有限公司 Method, device and equipment for detecting dotted lane line

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104408460A (en) * 2014-09-17 2015-03-11 University of Electronic Science and Technology of China A lane line detection and tracking method
CN104599502A (en) * 2015-02-13 2015-05-06 Chongqing University of Posts and Telecommunications Method for traffic flow statistics based on video monitoring
CN105224909A (en) * 2015-08-19 2016-01-06 Chery Automobile Co., Ltd. Lane line confirmation method in lane detection system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102592114B (en) * 2011-12-26 2013-07-31 河南工业大学 Method for extracting and recognizing lane line features of complex road conditions
CN102663744B (en) * 2012-03-22 2015-07-08 杭州电子科技大学 Complex road detection method under gradient point pair constraint
CN103617412B (en) * 2013-10-31 2017-01-18 电子科技大学 Real-time lane line detection method
CN105930800B (en) * 2016-04-21 2019-02-01 北京智芯原动科技有限公司 A kind of method for detecting lane lines and device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104408460A (en) * 2014-09-17 2015-03-11 University of Electronic Science and Technology of China A lane line detection and tracking method
CN104599502A (en) * 2015-02-13 2015-05-06 Chongqing University of Posts and Telecommunications Method for traffic flow statistics based on video monitoring
CN105224909A (en) * 2015-08-19 2016-01-06 Chery Automobile Co., Ltd. Lane line confirmation method in lane detection system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Lane line detection method based on road region segmentation (道路区域分割的车道线检测方法); Lu Man et al.; CAAI Transactions on Intelligent Systems (智能系统学报); Dec. 31, 2010; Vol. 5, No. 6; full text *

Also Published As

Publication number Publication date
CN108090401A (en) 2018-05-29

Similar Documents

Publication Publication Date Title
CN109829875B (en) Method and apparatus for estimating parallax
US9142011B2 (en) Shadow detection method and device
US10311595B2 (en) Image processing device and its control method, imaging apparatus, and storage medium
JP6299291B2 (en) Road edge detection method and road edge detection device
JP6897335B2 (en) Learning program, learning method and object detector
US9070042B2 (en) Image processing apparatus, image processing method, and program thereof
JP6349742B2 (en) Multi-lane detection method and detection system
CN106887018B (en) Stereo matching method, controller and system
CN104346811B (en) Object real-time tracking method and its device based on video image
JP2016194925A (en) Method and device of detecting road boundary object
JP2011113197A (en) Method and system for image search
US10013618B2 (en) Method and apparatus for detecting side of object using ground boundary information of obstacle
EP2993621B1 (en) Method and apparatus for detecting shielding against object
CN108898148B (en) Digital image corner detection method, system and computer readable storage medium
US10007678B2 (en) Image processing apparatus, image processing method, and recording medium
JP2018088151A (en) Boundary line estimating apparatus
CN108090401B (en) Line detection method and line detection apparatus
CN111178193A (en) Lane line detection method, lane line detection device and computer-readable storage medium
CN110443228B (en) Pedestrian matching method and device, electronic equipment and storage medium
CN110705330A (en) Lane line detection method, lane line detection apparatus, and computer-readable storage medium
CN112889061B (en) Face image quality evaluation method, device, equipment and storage medium
KR101910256B1 (en) Lane Detection Method and System for Camera-based Road Curvature Estimation
CN106157289B (en) Line detecting method and equipment
Ai et al. Geometry preserving active polygon-incorporated sign detection algorithm
CN109636844B (en) Complex desktop point cloud segmentation method based on 3D bilateral symmetry

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 2021-12-14