CN105718872B - Auxiliary method and system for rapidly positioning lanes on two sides and detecting vehicle deflection angle

Auxiliary method and system for rapidly positioning lanes on two sides and detecting vehicle deflection angle

Info

Publication number
CN105718872B
Authority
CN
China
Prior art keywords
image
lane
histogram
vehicle
lane line
Prior art date
Legal status
Active
Application number
CN201610030384.0A
Other languages
Chinese (zh)
Other versions
CN105718872A (en)
Inventor
苏晓聪
韩盛
朱敦尧
Current Assignee
WUHAN KOTEI TECHNOLOGY Corp
Original Assignee
WUHAN KOTEI TECHNOLOGY Corp
Priority date
Filing date
Publication date
Application filed by WUHAN KOTEI TECHNOLOGY Corp filed Critical WUHAN KOTEI TECHNOLOGY Corp
Priority to CN201610030384.0A
Publication of CN105718872A
Application granted
Publication of CN105718872B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an auxiliary method and system for rapidly positioning the lanes on both sides of a vehicle and detecting the vehicle deflection angle. An optimized gray-scale image is obtained by preprocessing; lane lines are extracted from the gray-scale image by binarization; based on the transverse histogram projection of the image, the image and its histogram are corrected so that the lane lines are parallel to the road, and the deflection angle of the vehicle at the current moment is obtained. The number of lane lines is then estimated from the histogram and verified against the variance of the histogram sample data, the estimate with the minimum variance being taken as the number of lane lines on that side. Compared with traditional lane positioning methods, the lane line positioning obtained by this method and system is unaffected by most external conditions and the algorithm is highly robust; lane lines are positioned quickly, meeting the real-time requirements of vehicle-mounted equipment; and when the lane lines are positioned, the included angle between the intelligent vehicle and the lane line is obtained, laying a foundation for subsequent lane departure warning.

Description

Auxiliary method and system for rapidly positioning lanes on two sides and detecting vehicle deflection angle
Technical Field
The invention relates to the field of traffic information detection in the intelligent driving industry, and in particular to an auxiliary method and system for rapidly positioning the lanes on two sides of a vehicle and detecting the vehicle deflection angle.
Background
Intelligent driving is one of the hot topics among artificial-intelligence researchers. GPS-based lane positioning depends on positioning accuracy and is easily disturbed, so it cannot provide reliable information to the vehicle control system; radar-assisted positioning is likewise affected by obstacles and nearby passing vehicles, is costly, provides limited useful information, and is not suitable for locating the vehicle precisely enough to guide driving.
To address the influence of such conditions on lane line detection, many researchers have proposed vision-based methods for assisting lane positioning. These methods fall broadly into two types: those based on color information and those based on edge information. However, unpredictable conditions such as weather, roadside trees and flowers, shadows, nearby vehicles, and road surface conditions can affect both kinds of detection.
In addition, conventional vision-assisted positioning methods, whether based on lane line feature extraction or on lane line color segmentation, need to fit the detected line segments with a suitable mathematical model, which consumes considerable hardware resources. A lane line positioning method that is unaffected by most external conditions, highly robust, and fast is therefore urgently needed.
Disclosure of Invention
To overcome the shortcomings of the prior art, the invention aims to provide an auxiliary method that is unaffected by most external conditions, highly robust, and fast at positioning lane lines.
An auxiliary method for rapidly positioning the lanes on two sides and detecting the vehicle deflection angle comprises the following steps:
S1, acquiring scene images of the two sides of the road while the vehicle is driving, and preprocessing the acquired images to obtain an optimized gray-scale image;
S2, extracting the lane lines from the gray-scale image by binarization, and, based on the transverse histogram projection of the image, correcting the image and its histogram so that the lane lines are parallel to the road, thereby obtaining the deflection angle of the vehicle at the current moment;
and S3, estimating the number of lane lines from the histogram and verifying the variance of the histogram sample data for each estimate, the estimate with the minimum variance being the number of lane lines on that side.
An auxiliary system for rapidly positioning the lanes on two sides and detecting the vehicle deflection angle comprises:
the image processing module, used for acquiring scene images of the two sides of the road while the vehicle is driving and preprocessing the acquired images to obtain an optimized gray-scale image;
the deflection angle acquisition module, used for extracting the lane lines from the gray-scale image by binarization and, based on the transverse histogram projection of the image, correcting the image and its histogram so that the lane lines are parallel to the road, thereby obtaining the deflection angle of the vehicle at the current moment;
and the lane line estimation module, used for estimating the number of lane lines from the histogram and verifying the variance of the histogram sample data for each estimate, the estimate with the minimum variance being the number of lane lines on that side.
The invention thus provides an auxiliary method and system for rapidly positioning the lanes on both sides and detecting the vehicle deflection angle: an optimized gray-scale image is obtained by preprocessing; lane lines are extracted from the gray-scale image by binarization; based on the transverse histogram projection of the image, the image and its histogram are corrected so that the lane lines are parallel to the road, and the deflection angle of the vehicle at the current moment is obtained; the number of lane lines is then estimated from the histogram and verified against the variance of the histogram sample data, the estimate with the minimum variance being taken as the number of lane lines on that side.
The auxiliary method and system for rapidly positioning the lanes on two sides and detecting the vehicle deflection angle rest on a solid algorithmic basis and offer high feasibility and stability; the system has complete input and output interfaces and can carry out the auxiliary positioning task well. Compared with traditional vision-based lane positioning methods, the lane line positioning obtained by this method and system is unaffected by most external conditions and the algorithm is highly robust; lane lines are positioned quickly, meeting the real-time requirements of vehicle-mounted equipment; and when the lane lines are positioned, the included angle between the intelligent vehicle and the lane line is obtained, laying a foundation for subsequent lane departure warning. The system can carry out its task with an ordinary camera, without expensive radar or image acquisition equipment, so its cost is low and it is suitable for wide application.
Drawings
FIG. 1 is a block flow diagram of the auxiliary method for rapidly positioning the lanes on two sides and detecting the vehicle deflection angle according to an embodiment of the present invention;
FIG. 2 is a block flow diagram of step S1 of FIG. 1;
FIG. 3 is an aerial view of an image after an inverse perspective transformation process is performed on the image according to an embodiment of the present invention;
FIG. 4 is a block flow diagram of step S2 of FIG. 1;
FIG. 5 is a block diagram of the flow of step S21 in FIG. 4;
FIG. 6 is a binary image after the binarization processing is performed on the image in the embodiment of the invention;
FIG. 7 is a block flow diagram of step S3 of FIG. 1;
FIG. 8 is a schematic illustration of marking lane lines in an image in accordance with an embodiment of the present invention;
FIG. 9 is a block diagram of the auxiliary system for rapidly positioning the lanes on two sides and detecting the vehicle deflection angle according to an embodiment of the present invention;
FIG. 10 is a sub-functional block diagram of the image processing module of FIG. 9;
FIG. 11 is a block diagram of sub-functional blocks of the deflection angle acquisition module of FIG. 9;
FIG. 12 is a block diagram of the functional units of the binarization submodule of FIG. 11;
FIG. 13 is a block diagram of sub-functional modules of the lane line estimation module of FIG. 9;
FIG. 14 is another block diagram of the auxiliary system for rapidly positioning the lanes on two sides and detecting the vehicle deflection angle according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail with reference to the accompanying drawings and embodiments, it being understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit the present invention.
As shown in fig. 1, the present invention provides an auxiliary method for rapidly positioning the lanes on two sides and detecting the vehicle deflection angle, which comprises the following steps:
S1, acquiring scene images of the two sides of the road while the vehicle is driving, and preprocessing the acquired images to obtain an optimized gray-scale image;
S2, extracting the lane lines from the gray-scale image by binarization, and, based on the transverse histogram projection of the image, correcting the image and its histogram so that the lane lines are parallel to the road, thereby obtaining the deflection angle of the vehicle at the current moment;
and S3, estimating the number of lane lines from the histogram and verifying the variance of the histogram sample data for each estimate, the estimate with the minimum variance being the number of lane lines on that side.
As shown in fig. 2, the step S1 includes the following sub-steps:
S11, performing color balance on the acquired image;
S12, performing inverse perspective transformation on the color-balanced image to change it from a perspective view into an aerial view;
S13, performing gray-scale conversion on the image after the inverse perspective transformation;
and S14, performing noise suppression and shadow weakening on the gray-scale image.
Specifically, cameras installed under rearview mirrors on two sides of the intelligent driving vehicle collect video streams of road scenes on two sides in real time, image data of a current frame are obtained from the video streams, and the image data are stored in a matrix data structure.
Owing to the camera optics and the external environment, white objects in the captured frame often do not show their true color. The R, G, and B channels of the image are therefore separated, the histogram of each channel is computed, its cumulative histogram is calculated, and two thresholds min and max are derived from preset empirical parameters; gray levels above max or below min are clipped, and the three channels are then merged again, giving an image whose white objects are closer to white objects in the real world.
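As an illustration only, this balance step could be sketched in Python with OpenCV roughly as follows; the clip fraction used to derive the min and max thresholds is an assumed stand-in for the preset empirical parameters mentioned above, and the image path is a placeholder.

```python
import cv2
import numpy as np

def simple_color_balance(img_bgr, clip_percent=1.0):
    """Per-channel cumulative-histogram clipping ("simple white balance").
    clip_percent is an assumed empirical parameter."""
    balanced = []
    for ch in cv2.split(img_bgr):
        hist = cv2.calcHist([ch], [0], None, [256], [0, 256]).flatten()
        cdf = np.cumsum(hist)
        total = cdf[-1]
        # min/max gray thresholds derived from the cumulative histogram
        lo = int(np.searchsorted(cdf, total * clip_percent / 100.0))
        hi = int(np.searchsorted(cdf, total * (1.0 - clip_percent / 100.0)))
        hi = max(hi, lo + 1)
        # clip values outside [lo, hi] and stretch back to [0, 255]
        stretched = (np.clip(ch, lo, hi).astype(np.float32) - lo) * 255.0 / (hi - lo)
        balanced.append(stretched.astype(np.uint8))
    return cv2.merge(balanced)

frame = cv2.imread("frame.jpg")            # placeholder path for the current video frame
frame = simple_color_balance(frame)
```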
Each frame captured by the camera is formed by perspective projection, so the spacing of the lane lines in the image is not strictly constant or even approximately so; the image therefore has to undergo an inverse perspective transformation, which produces the aerial view shown in fig. 3.
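Continuing the running sketch on the balanced frame, the inverse perspective mapping could look roughly like this; the four source and destination points are purely illustrative placeholders, since the real homography depends on the camera's mounting height and angle, which are not specified here.

```python
import cv2
import numpy as np

# Four points on the road plane in the camera image and their target positions in the
# aerial (bird's-eye) view. These coordinates are illustrative placeholders, not calibration data.
src = np.float32([[420, 300], [860, 300], [1180, 700], [100, 700]])
dst = np.float32([[300, 0], [980, 0], [980, 720], [300, 720]])

M = cv2.getPerspectiveTransform(src, dst)               # homography for the inverse perspective map
birdseye = cv2.warpPerspective(frame, M, (1280, 720))   # top-down view, lane lines roughly parallel
```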
To reduce the difficulty of further processing, the original color image is converted to gray scale so that only the brightness information is retained. The conversion can follow the usual RGB graying formula: if the pixel value at a point of the three-channel RGB image is (r, g, b), the gray value at that point is 0.299r + 0.587g + 0.114b.
Further, the resulting image is filtered with a 9×9 Gaussian kernel and opened with a 5×5 structuring element, which removes small isolated points and smooths edge burrs, yielding the optimized gray-scale image.
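Continuing the running sketch on the aerial-view image, the graying, 9×9 Gaussian filtering, and 5×5 opening described above might look like this (cv2.cvtColor applies the same 0.299/0.587/0.114 weights):

```python
import cv2

gray = cv2.cvtColor(birdseye, cv2.COLOR_BGR2GRAY)       # 0.299*R + 0.587*G + 0.114*B
gray = cv2.GaussianBlur(gray, (9, 9), 0)                # 9x9 Gaussian filter: noise suppression
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
gray = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)   # 5x5 opening: remove isolated points, smooth burrs
```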
As shown in fig. 4, the step S2 includes the following sub-steps:
S21, performing binarization on the gray-scale image, setting a gray threshold, and selecting the lane lines and other gray-stable regions according to the gray threshold;
S22, performing edge detection on the binary image, searching for contours, and removing white objects other than the lane lines according to the contour characteristics of lane lines;
S23, rotating the gray-scale image and calculating the transverse histogram projection of the image after each rotation;
and S24, recording the variance of the overall sample of each histogram, recording the angle, image, and histogram corresponding to the maximum variance, and obtaining the deflection angle of the vehicle at the current moment.
Because the gray-scale image has already undergone the inverse perspective transformation, the objects of interest are segmented by an MSER (maximally stable extremal region) search, i.e. the gray-scale image is binarized and the regions whose gray level is locally stable are selected. After this step the lane lines and other gray-stable regions in the image are set to white and everything else to black.
specifically, as shown in fig. 5, the step S21 includes the following sub-steps:
S211, searching for the minimum pixel in the image matrix, taking it as a node, searching for similar pixels whose values are close to it, and grouping them into a set;
S212, taking the similar pixels as new nodes, continuing to search for further similar pixels and adding them to the same set until every point in the image has been traversed;
and S213, classifying the image into several regions according to the sets, setting a gray threshold, and filling the lane lines and other gray-stable regions that satisfy the gray threshold white and all other regions black.
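As an illustration, OpenCV's built-in MSER detector can stand in for the region-growing procedure of steps S211 to S213; the sketch below shows that substitution, not the patented procedure itself.

```python
import cv2
import numpy as np

def mser_binarize(gray):
    """Fill maximally stable extremal regions (lane lines and other gray-stable areas)
    white and everything else black. OpenCV's detector stands in for the custom
    region-growing of steps S211-S213."""
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray)    # each region is an array of (x, y) pixel coordinates
    binary = np.zeros_like(gray)
    for pts in regions:
        binary[pts[:, 1], pts[:, 0]] = 255
    return binary

binary = mser_binarize(gray)
```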
However, the resulting image still contains a few small regions that are not of interest but have been classified as regions of interest, i.e. filled white. Canny edge detection is therefore performed on the image, the contours of the regions of interest are extracted again, and contours whose area and aspect ratio fall within the set threshold ranges are filtered out, i.e. filled black, which yields a binary image containing only the main lane-surface information, as shown in fig. 6. After this step essentially only the lane lines and a small amount of clutter remain in the image.
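A sketch of this contour filtering is given below; here the contours are taken directly from the binary image rather than from a separate Canny pass, and the area and elongation thresholds are illustrative assumptions, since the text only states that threshold ranges are set (OpenCV 4 signature of findContours assumed).

```python
import cv2
import numpy as np

def keep_lane_like_contours(binary, min_area=200.0, min_elongation=3.0):
    """Remove small white blobs that do not look like lane markings.
    min_area and min_elongation are illustrative thresholds."""
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros_like(binary)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        area = cv2.contourArea(c)
        elongation = max(w, h) / max(min(w, h), 1)   # lane lines are long, thin blobs
        if area >= min_area and elongation >= min_elongation:
            cv2.drawContours(mask, [c], -1, 255, thickness=cv2.FILLED)
    return mask

binary = keep_lane_like_contours(binary)
```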
The image is then rotated within a prescribed threshold range, and the transverse (horizontal) histogram projection is computed for each rotated image, converting the image information into statistical data that is easier to quantify. The peak values of each histogram are recorded and the vertical coordinate (row) of each lane line in the image is estimated from the peak positions, giving a corrected image in which the lane lines are parallel to the X axis. In this embodiment the prescribed threshold range is ±15°, i.e. from 0° out to +15° and −15°.
Meanwhile, the variance of the overall sample of each histogram is computed during the traversal, and the rotation angle corresponding to the maximum variance, the rotation-corrected image, and its histogram are recorded. In this way the image whose lane lines have been corrected to be parallel to its X axis, together with its histogram, is retained for subsequent processing, and the correction angle, that is, the deflection angle of the vehicle at the current moment, is obtained.
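The rotation sweep and variance criterion of steps S23 and S24 could be sketched roughly as follows, continuing the running example; the 0.5° step size is an assumption, while the ±15° range is the one stated above.

```python
import cv2
import numpy as np

def estimate_deflection(binary, max_angle=15.0, step=0.5):
    """Rotate the binary image through [-15, +15] degrees, project each result onto its
    rows (transverse histogram), and keep the angle whose projection has the largest
    variance: there the lane lines are parallel to the X axis and form the sharpest peaks."""
    h, w = binary.shape
    center = (w / 2.0, h / 2.0)
    best_angle, best_var, best_img, best_hist = 0.0, -1.0, binary, None
    for angle in np.arange(-max_angle, max_angle + step, step):
        M = cv2.getRotationMatrix2D(center, float(angle), 1.0)
        rotated = cv2.warpAffine(binary, M, (w, h))
        hist = rotated.sum(axis=1).astype(np.float64)   # row-wise (transverse) projection
        var = hist.var()                                # variance of the whole histogram sample
        if var > best_var:
            best_angle, best_var, best_img, best_hist = float(angle), var, rotated, hist
    return best_angle, best_img, best_hist              # best_angle ~ current vehicle deflection angle

deflection, corrected, row_hist = estimate_deflection(binary)
```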
As shown in fig. 7, the step S3 includes the following sub-steps:
S31, obtaining the road width from the pixels in the aerial view and estimating the number of lanes from the road width;
S32, estimating candidate lane lines from the peaks in the histogram and selecting, among them, the combinations that match the prior number of lanes;
and S33, verifying the variance of the histogram sample data for the lane lines of each combination matching the prior number of lanes, and selecting the estimate with the minimum sample data variance as the number of lane lines on that side.
Specifically, the actual road width is calculated from the pixels in the aerial view obtained by the inverse perspective transformation, and the number of lanes that may actually exist is estimated. The number of detected lane lines is then estimated from the histogram obtained in step S24, and the prior number of lane lines is chosen from among the detected ones, with every combination being considered; for example, if 5 lane lines are detected and 3 are expected a priori, there are C(5,3) = 10 combinations, from which the lane line combinations that meet the threshold requirement are selected;
and verifying the sample data variance of the histogram according to the number of lane lines of the lane line combination obtained by screening, selecting the estimation result with the minimum sample data variance as the number of the side lane lines, and marking the position of the side lane line, as shown in fig. 8.
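As an illustration, the combination search and variance check of step S3 could be sketched as follows. The text verifies the "sample data variance of the histogram" for each combination; here the variance of the gaps between candidate peak rows is used as one plausible reading of that criterion, and the peak threshold and minimum spacing are illustrative assumptions.

```python
import itertools
import numpy as np

def select_lane_lines(row_hist, n_lines_prior, min_gap_px=40, peak_ratio=0.5):
    """Pick the rows of the side lane lines from the corrected row histogram.
    Candidate peaks above peak_ratio*max are enumerated in combinations of the prior
    lane-line count; the combination with the most uniform spacing (minimum gap
    variance) is returned. Thresholds are illustrative assumptions."""
    threshold = peak_ratio * row_hist.max()
    peaks = [i for i in range(1, len(row_hist) - 1)
             if row_hist[i] >= threshold
             and row_hist[i] >= row_hist[i - 1]
             and row_hist[i] >= row_hist[i + 1]]
    best_combo, best_var = None, np.inf
    for combo in itertools.combinations(peaks, n_lines_prior):   # e.g. C(5, 3) = 10 combinations
        gaps = np.diff(combo)
        if np.any(gaps < min_gap_px):        # neighbouring lane lines cannot be this close
            continue
        var = float(np.var(gaps))
        if var < best_var:
            best_combo, best_var = combo, var
    return best_combo                        # rows of the side lane lines in the corrected image

lane_rows = select_lane_lines(row_hist, n_lines_prior=3)
```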
As shown in fig. 9, the present invention further provides an auxiliary system for rapidly positioning the lanes on two sides and detecting the vehicle deflection angle, which comprises:
the image processing module, used for acquiring scene images of the two sides of the road while the vehicle is driving and preprocessing the acquired images to obtain an optimized gray-scale image;
the deflection angle acquisition module, used for extracting the lane lines from the gray-scale image by binarization and, based on the transverse histogram projection of the image, correcting the image and its histogram so that the lane lines are parallel to the road, thereby obtaining the deflection angle of the vehicle at the current moment;
and the lane line estimation module, used for estimating the number of lane lines from the histogram and verifying the variance of the histogram sample data for each estimate, the estimate with the minimum variance being the number of lane lines on that side.
As shown in fig. 10, the image processing module includes the following sub-modules:
the color balance submodule is used for carrying out color balance on the acquired image;
the perspective transformation submodule is used for carrying out inverse perspective transformation processing on the image subjected to the color balance processing so as to change the image from a perspective view to an aerial view;
the gray processing submodule is used for carrying out gray color processing on the image subjected to the inverse perspective transformation processing;
and the optimization processing submodule is used for carrying out noise suppression and shadow weakening processing on the image after the color processing.
As shown in fig. 11, the deflection angle acquisition module includes the following sub-modules:
the binarization submodule is used for carrying out binarization processing on the gray level image, setting a gray level threshold value, and selecting a lane line and other gray level stable areas according to the gray level threshold value;
the edge detection submodule is used for carrying out edge detection on the binary image, searching for a contour, and removing white objects except the lane line according to the contour characteristics of the lane line;
the image rotation submodule is used for rotating the gray level image and calculating the projection of a transverse histogram of the image rotated each time;
and the image correction sub-module is used for recording the variance of the overall sample of each histogram, recording the angle, image, and histogram corresponding to the maximum variance, correcting captured images in which the lane lines are not parallel to the road because of vehicle rotation, and obtaining the deflection angle of the vehicle at the current moment.
As shown in fig. 12, the binarization submodule includes the following functional units:
the set establishing unit is used for searching the minimum pixel point in the image matrix, searching similar pixel points which are close to the pixel value of the minimum pixel point by taking the minimum pixel point as a node, and grouping the similar pixel points into a set;
the set dividing unit is used for continuously searching other similar pixel points by taking the similar pixel points as nodes again and adding the similar pixel points into the same set until all the points in the image are traversed;
and the area dividing unit is used for classifying the image into a plurality of areas according to the set, setting a gray threshold, filling the lane lines meeting the gray threshold and other stable gray areas into white, and filling other areas into black.
As shown in fig. 13, the lane line estimation module includes the following sub-modules:
the road width estimation submodule is used for obtaining the actual road width according to the pixel points in the overhead view and estimating the actual number of the lanes according to the actual road width;
the combination selection submodule is used for estimating candidate lane lines from the peaks in the histogram and selecting, among them, a combination that matches the actual number of lanes;
and the variance estimation submodule is used for verifying the sample data variance of the histogram according to the lane line number of the actual lane number combination and selecting the estimation result with the minimum sample data variance as the side lane line number.
The auxiliary system for rapidly positioning the lanes on two sides and detecting the vehicle deflection angle further comprises an input auxiliary module and an output auxiliary module; as shown in fig. 14, the input auxiliary module comprises:
the positioning equipment is used for obtaining the current position information of the lane by utilizing the data obtained by the GPS positioning and inertial navigation equipment;
the high-precision map module is used for acquiring a high-precision map which is acquired in advance from the cloud server;
an image acquisition device: the cameras installed on the rearview mirrors on the two sides of the intelligent vehicle transmit video streams on the two sides of the vehicle in real time.
The output auxiliary module comprises a decision module, which judges whether to raise an alarm according to whether the current vehicle deflection angle, the vehicle speed, and the distance to the lane line all meet certain empirical thresholds.
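A minimal sketch of such a decision rule is given below, with purely illustrative threshold values standing in for the empirical values mentioned above.

```python
def should_warn(deflection_deg, speed_kmh, dist_to_lane_m,
                max_deflection_deg=5.0, min_speed_kmh=30.0, min_dist_m=0.3):
    """Lane-departure warning decision: alarm only when the deflection angle is large,
    the vehicle is moving fast enough, and it is already close to the lane line.
    All three thresholds are illustrative placeholders."""
    return (abs(deflection_deg) > max_deflection_deg
            and speed_kmh > min_speed_kmh
            and dist_to_lane_m < min_dist_m)
```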
The above apparatus embodiments correspond one-to-one with the method embodiments, and for points described only briefly in the apparatus embodiments reference may be made to the method embodiments.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory, read only memory, electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable magnetic disk, a CD-ROM, or any other form of storage medium known in the art.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (6)

1. An auxiliary method for rapidly positioning the lanes on two sides and detecting the vehicle deflection angle, characterized in that the auxiliary method comprises the following steps:
S1, acquiring scene images of the two sides of the road while the vehicle is driving, and preprocessing the acquired images to obtain an optimized gray-scale image, wherein step S1 comprises the following sub-steps:
S11, performing color balance on the acquired image;
S12, performing inverse perspective transformation on the color-balanced image to change it from a perspective view into an aerial view;
S13, performing gray-scale conversion on the image after the inverse perspective transformation;
S14, performing noise suppression and shadow weakening on the gray-scale image;
S2, extracting the lane lines from the gray-scale image by binarization, and, based on the transverse histogram projection of the image, correcting the image and its histogram so that the lane lines are parallel to the road, thereby obtaining the deflection angle of the vehicle at the current moment;
and S3, estimating the number of lane lines from the histogram and verifying the variance of the histogram sample data for each estimate, the estimate with the minimum variance being the number of lane lines on that side.
2. The auxiliary method for rapidly positioning the lanes on two sides and detecting the vehicle deflection angle according to claim 1, wherein step S2 comprises the following sub-steps:
S21, performing binarization on the gray-scale image, setting a gray threshold, and selecting the lane lines and other gray-stable regions according to the gray threshold;
S22, performing edge detection on the binary image, searching for contours, and removing white objects other than the lane lines according to the contour characteristics of lane lines;
S23, rotating the gray-scale image and calculating the transverse histogram projection of the image after each rotation;
and S24, recording the variance of the overall sample of each histogram, recording the angle, image, and histogram corresponding to the maximum variance, and obtaining the deflection angle of the vehicle at the current moment.
3. The auxiliary method for rapidly positioning the lanes on two sides and detecting the vehicle deflection angle according to claim 2, wherein step S3 comprises the following sub-steps:
S31, obtaining the actual road width from the pixels in the aerial view, and estimating the actual number of lanes from the actual road width;
S32, estimating candidate lane lines from the peaks in the histogram, and selecting from them the combinations that match the actual number of lanes;
and S33, verifying the variance of the histogram sample data for the lane lines of each combination matching the actual number of lanes, and selecting the estimate with the minimum sample data variance as the number of lane lines on that side.
4. An auxiliary system for rapidly positioning the lanes on two sides and detecting the vehicle deflection angle, characterized in that the auxiliary system comprises:
the image processing module, used for acquiring scene images of the two sides of the road while the vehicle is driving and preprocessing the acquired images to obtain an optimized gray-scale image, the image processing module comprising the following sub-modules:
the color balance submodule, used for performing color balance on the acquired image;
the perspective transformation submodule, used for performing inverse perspective transformation on the color-balanced image to change it from a perspective view into an aerial view;
the gray processing submodule, used for performing gray-scale conversion on the image after the inverse perspective transformation;
the optimization processing submodule, used for performing noise suppression and shadow weakening on the gray-scale image;
the deflection angle acquisition module, used for extracting the lane lines from the gray-scale image by binarization and, based on the transverse histogram projection of the image, correcting the image and its histogram so that the lane lines are parallel to the road, thereby obtaining the deflection angle of the vehicle at the current moment;
and the lane line estimation module, used for estimating the number of lane lines from the histogram and verifying the variance of the histogram sample data for each estimate, the estimate with the minimum variance being the number of lane lines on that side.
5. The auxiliary system for rapidly positioning the lanes on two sides and detecting the vehicle deflection angle according to claim 4, wherein the deflection angle acquisition module comprises the following sub-modules:
the binarization submodule is used for carrying out binarization processing on the gray level image, setting a gray level threshold value, and selecting a lane line and other gray level stable areas according to the gray level threshold value;
the edge detection submodule is used for carrying out edge detection on the binary image, searching for a contour, and removing white objects except the lane line according to the contour characteristics of the lane line;
the image rotation submodule is used for rotating the gray level image and calculating the projection of a transverse histogram of the image rotated each time;
and the image correction sub-module is used for recording the variance of the overall sample of each histogram, recording the angle, image, and histogram corresponding to the maximum variance, correcting captured images in which the lane lines are not parallel to the road because of vehicle rotation, and obtaining the deflection angle of the vehicle at the current moment.
6. The auxiliary system for rapidly positioning the lanes on two sides and detecting the vehicle deflection angle according to claim 5, wherein the lane line estimation module comprises the following sub-modules:
the road width estimation submodule is used for obtaining the actual road width according to the pixel points in the overhead view and estimating the actual number of the lanes according to the actual road width;
the combination selection submodule is used for estimating candidate lane lines from the peaks in the histogram and selecting, among them, a combination that matches the actual number of lanes;
and the variance estimation submodule is used for verifying the sample data variance of the histogram according to the lane line number of the actual lane number combination and selecting the estimation result with the minimum sample data variance as the side lane line number.
CN201610030384.0A 2016-01-15 2016-01-15 Auxiliary method and system for rapidly positioning lanes on two sides and detecting vehicle deflection angle Active CN105718872B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610030384.0A CN105718872B (en) 2016-01-15 2016-01-15 Auxiliary method and system for rapidly positioning lanes on two sides and detecting vehicle deflection angle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610030384.0A CN105718872B (en) 2016-01-15 2016-01-15 Auxiliary method and system for rapidly positioning lanes on two sides and detecting vehicle deflection angle

Publications (2)

Publication Number Publication Date
CN105718872A (en) 2016-06-29
CN105718872B (en) 2020-02-04

Family

ID=56147124

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610030384.0A Active CN105718872B (en) 2016-01-15 2016-01-15 Auxiliary method and system for rapidly positioning lanes on two sides and detecting vehicle deflection angle

Country Status (1)

Country Link
CN (1) CN105718872B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107193862A (en) * 2017-04-01 2017-09-22 国家计算机网络与信息安全管理中心 A kind of variance optimization histogram construction method and device based on Spark Streaming
WO2018232681A1 (en) * 2017-06-22 2018-12-27 Baidu.Com Times Technology (Beijing) Co., Ltd. Traffic prediction based on map images for autonomous driving
CN107818541B (en) * 2017-10-24 2021-09-17 智车优行科技(北京)有限公司 Overlook image transformation method and device and automobile
CN108528336B (en) * 2018-04-18 2021-05-18 福州大学 Vehicle line pressing early warning system
CN110178167B (en) * 2018-06-27 2022-06-21 潍坊学院 Intersection violation video identification method based on cooperative relay of cameras
CN109344704B (en) * 2018-08-24 2021-09-14 南京邮电大学 Vehicle lane change behavior detection method based on included angle between driving direction and lane line
CN109583418B (en) * 2018-12-13 2021-03-12 武汉光庭信息技术股份有限公司 Lane line deviation self-correction method and device based on parallel relation
CN111382591B (en) * 2018-12-27 2023-09-29 海信集团有限公司 Binocular camera ranging correction method and vehicle-mounted equipment
CN110008921B (en) * 2019-04-12 2021-12-28 北京百度网讯科技有限公司 Road boundary generation method and device, electronic equipment and storage medium
CN110097025B (en) * 2019-05-13 2023-08-04 奇瑞汽车股份有限公司 Lane line detection method, device and storage medium
CN111537954A (en) * 2020-04-20 2020-08-14 孙剑 Real-time high-dynamic fusion positioning method and device
CN111797766B (en) * 2020-07-06 2022-01-11 三一专用汽车有限责任公司 Identification method, identification device, computer-readable storage medium, and vehicle
CN112528776B (en) * 2020-11-27 2024-04-09 京东科技控股股份有限公司 Text line correction method and device

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101032405A (en) * 2007-03-21 2007-09-12 汤一平 Safe driving auxiliary device based on omnidirectional computer vision

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9930323B2 (en) * 2014-04-23 2018-03-27 GM Global Technology Operations LLC Method of misalignment correction and diagnostic function for lane sensing sensor

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101032405A (en) * 2007-03-21 2007-09-12 汤一平 Safe driving auxiliary device based on omnidirectional computer vision

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on waterway detection based on panoramic imaging; Xue Feng; China Master's Theses Full-text Database, Information Science and Technology; 15 July 2009 (No. 07); abstract, pp. 33-46 *
Research on vehicle recognition and distance measurement based on radial projection histograms and corner detection; Ling Jun; China Master's Theses Full-text Database, Information Science and Technology; 15 April 2011 (No. 04); abstract, pp. 31-48 *

Also Published As

Publication number Publication date
CN105718872A (en) 2016-06-29

Similar Documents

Publication Publication Date Title
CN105718872B (en) Auxiliary method and system for rapidly positioning lanes on two sides and detecting vehicle deflection angle
EP3171292B1 (en) Driving lane data processing method, device, storage medium and apparatus
Bilal et al. Real-time lane detection and tracking for advanced driver assistance systems
CN109460709B (en) RTG visual barrier detection method based on RGB and D information fusion
CN102682292B (en) Method based on monocular vision for detecting and roughly positioning edge of road
CN111179152A (en) Road sign identification method and device, medium and terminal
CN108198417B (en) A kind of road cruising inspection system based on unmanned plane
EP2813973B1 (en) Method and system for processing video image
CN108961276B (en) Distribution line inspection data automatic acquisition method and system based on visual servo
CN111738033B (en) Vehicle driving information determination method and device based on plane segmentation and vehicle-mounted terminal
CN112990087A (en) Lane line detection method, device, equipment and readable storage medium
CN111127520A (en) Vehicle tracking method and system based on video analysis
FAN et al. Robust lane detection and tracking based on machine vision
CN117197019A (en) Vehicle three-dimensional point cloud image fusion method and system
CN114724119B (en) Lane line extraction method, lane line detection device, and storage medium
CN116703979A (en) Target tracking method, device, terminal and storage medium
Danilescu et al. Road anomalies detection using basic morphological algorithms
CN116152758A (en) Intelligent real-time accident detection and vehicle tracking method
CN116030430A (en) Rail identification method, device, equipment and storage medium
CN114627395A (en) Multi-rotor unmanned aerial vehicle angle analysis method, system and terminal based on nested targets
CN111626180A (en) Lane line detection method and device based on polarization imaging
CN117557616B (en) Method, device and equipment for determining pitch angle and estimating depth of monocular camera
Huo et al. The license plate recognition system based on improved algorithm
CN115619856B (en) Lane positioning method based on cooperative vehicle and road sensing
EP4383199A1 (en) Method of calibrating extrinsic video camera parameters

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant