CN108572650B - Adaptive headlamp steering control algorithm based on lane line detection - Google Patents


Publication number
CN108572650B
CN108572650B (granted from application CN201810430833.XA)
Authority
CN
China
Prior art keywords
curve
point
lane line
image
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810430833.XA
Other languages
Chinese (zh)
Other versions
CN108572650A (en)
Inventor
Zhang Xiaorui (张小瑞)
Xu Ziqian (徐子茜)
Sun Wei (孙伟)
Song Aiguo (宋爱国)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology filed Critical Nanjing University of Information Science and Technology
Priority to CN201810430833.XA priority Critical patent/CN108572650B/en
Publication of CN108572650A publication Critical patent/CN108572650A/en
Application granted granted Critical
Publication of CN108572650B publication Critical patent/CN108572650B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0253Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0223Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Lighting Device Outwards From Vehicle And Optical Signal (AREA)

Abstract

The invention discloses an adaptive headlamp steering control algorithm based on lane line detection, comprising the following steps: (1) image preprocessing: a region of interest is determined from the structural characteristics of the picture and divided, the contrast of different areas of the picture is enhanced through a linear gray-scale transformation, and the image is then binarized with an improved Otsu (Dajin) algorithm; (2) lane line detection using an improved Hough transform algorithm; (3) curve fitting and curvature radius calculation; (4) a headlamp angle adjustment model is established and solved according to the geometric relation among the stopping sight distance, the curve illumination distance and the curvature radius. By rotating the headlamp, the invention eliminates the visual blind area on the inside of a curve when driving at night and ensures the safety of vehicles negotiating curves at night.

Description

Adaptive headlamp steering control algorithm based on lane line detection
Technical Field
The invention relates to adaptive headlamp steering control algorithms, and in particular to an adaptive headlamp steering control algorithm based on lane line detection.
Background
With the rapid growth in the number of vehicles, deaths and injuries from traffic accidents continue to rise, and reducing the frequency of traffic accidents has become a problem that urgently needs to be solved in China. It is reported that 82% of car accidents occur under poor night-time lighting conditions; meanwhile, major accidents when driving at night are about 1.5 times as frequent as in the daytime, and 60% of them occur on poorly lit curves. When a vehicle drives through a curve at night, the direction of the headlamp optical axis cannot be adjusted, so a visual "blind area" often appears on the inside of the curve. This blind area arises because the headlamp illumination area is fixed while the vehicle turns, and the driver's line of sight remains confined to the straight-line range illuminated by the beam, which creates a serious hidden danger to traffic safety.
Disclosure of Invention
The purpose of the invention is as follows: the invention aims to overcome the defects of the conventional passive headlamp steering control system and provide a high-reliability adaptive headlamp steering control algorithm based on lane line detection.
The technical scheme is as follows: the invention comprises the following steps:
(1) image preprocessing, including region-of-interest division, linear gray-scale processing and image binarization based on an improved Otsu (Dajin) algorithm;
(2) lane line detection using an improved Hough transform algorithm;
(3) curve fitting and curvature radius calculation;
(4) establishing and solving a headlamp angle adjustment model according to the geometric relation among the stopping sight distance, the curve illumination distance and the curvature radius.
The region-of-interest division in step (1) is as follows: the acquired image is divided into three horizontal areas from top to bottom: a sky area, a far vision area and a near vision area, where the height of the sky area is one half of the image height and the heights of the far vision area and near vision area are each one quarter of the image height; the far vision area and near vision area at the bottom of the image are set as the region of interest.
The lane line detection in step (2) is performed in the near vision area of the region of interest.
The curve fitting and curvature radius calculation in step (3) are both performed in the far vision area of the region of interest.
The calculation process of step (3) is as follows: first, curve feature points are selected with a scanning-iteration algorithm; then the curved lane line is filled in with a Catmull-Rom spline according to the feature points; finally, the curvature radius of the curve is calculated based on the camera imaging rule.
Beneficial effects: by rotating the headlamp, the invention eliminates the visual blind area on the inside of a curve when driving at night and ensures the safety of vehicles negotiating curves at night.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic view of region of interest partitioning according to the present invention;
fig. 3 is a schematic diagram of the constrained range of the Hough transform search region of the present invention.
Detailed Description
The invention will be further explained with reference to the drawings.
As shown in fig. 1, the present invention comprises the steps of:
(1) image preprocessing: determining and dividing the region of interest reduces the computation load of the subsequent algorithms; linear gray-scale processing enhances the contrast of each part of the picture and improves the accuracy of the subsequent detection algorithms; binarization of the image is completed with an improved Otsu (Dajin) algorithm.
Determination of the region of interest:
as shown in fig. 2, the acquired image is divided into three horizontal regions from top to bottom: a sky area, a far vision area and a near vision area, where the height of the sky area is one half of the image height and the heights of the far vision area and near vision area are each one quarter of the image height. The far vision area and near vision area at the bottom of the image are set as the region of interest. All following steps are performed within the region of interest.
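A minimal sketch of this three-band division (the function name and the NumPy array representation are illustrative, not from the patent):

```python
import numpy as np

def split_regions(image: np.ndarray):
    """Split a road image into sky, far vision and near vision bands.

    Top half -> sky (discarded), next quarter -> far vision area,
    bottom quarter -> near vision area, per the division in the text.
    """
    h = image.shape[0]
    sky = image[: h // 2]
    far = image[h // 2 : 3 * h // 4]   # far vision area: curve fitting happens here
    near = image[3 * h // 4 :]         # near vision area: straight-line detection
    return sky, far, near
```

The two bottom bands together form the region of interest on which all later steps operate.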
Linear gray scale transformation:
the purpose of the linear gray-scale transformation is to make bright parts of the image brighter and dark parts darker, increasing the contrast; this enlarges the gray-level difference between the lane-line class and the background class and reduces the loss of lane-line features during the subsequent binarization with the Otsu method.
Let the original image be I(x, y), with the gray values of its pixels in the interval [I_min, I_max], and let the desired output gray range be [I'_min, I'_max] (where I'_min < I_min and I'_max > I_max). Then:

I'(x, y) = (I'_max − I'_min) / (I_max − I_min) · (I(x, y) − I_min) + I'_min    (1)

where I'(x, y) is the linearly gray-transformed image.
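The linear gray-scale stretch above can be sketched as follows (mapping the observed range onto an assumed wider output range such as [0, 255]):

```python
import numpy as np

def linear_stretch(img, out_min=0.0, out_max=255.0):
    """Linear gray-scale transform: map [img.min(), img.max()] onto
    [out_min, out_max], making bright pixels brighter and dark ones darker."""
    img = img.astype(np.float64)
    i_min, i_max = img.min(), img.max()
    if i_max == i_min:                      # flat image: nothing to stretch
        return np.full_like(img, out_min)
    return (out_max - out_min) / (i_max - i_min) * (img - i_min) + out_min
```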
Image binarization based on an improved Otsu (Dajin) algorithm:
Let n be the total number of pixels in the image and n_i the number of pixels with gray value i; the probability that a pixel has gray value i is then:

p_i = n_i / n    (2)
Take t as the threshold: pixels with gray value less than or equal to t are classified into the lane-line class L, and pixels with gray value greater than t into the background class B; L in the summation bounds below also denotes the total number of distinct gray levels. The cumulative probabilities P(t) and mean gray values u(t) of the two classes are:

P_L(t) = Σ_{i=0}^{t} p_i,    u_L(t) = ( Σ_{i=0}^{t} i·p_i ) / P_L(t)    (3)

P_B(t) = Σ_{i=t+1}^{L−1} p_i,    u_B(t) = ( Σ_{i=t+1}^{L−1} i·p_i ) / P_B(t)
Therefore, the inter-class variance of the two classes, lane line class and background class, is:
δ(t) = ω·P_L(t)·(u_L(t))² + P_B(t)·(u_B(t))²    (4)
unlike the traditional Otsu method, and in view of the fact that the lane-line class occupies a much smaller area than the background class, the target (lane-line) variance term in the above inter-class variance is multiplied by a coefficient ω between 0 and 1, so that the background variance carries a greater weight, which is closer to the actual situation. To make the method more adaptive, let

ω = P_L(t)    (5)
The threshold T in the algorithm is:
T = arg max δ(t) = arg max ( P_L(t)·P_L(t)·(u_L(t))² + P_B(t)·(u_B(t))² )    (6)
the binarized image I'' produced by the improved Otsu algorithm can then be expressed as:

I''(x, y) = 1 if I'(x, y) ≤ T, and I''(x, y) = 0 otherwise    (7)
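The improved Otsu thresholding can be sketched as follows; the weighting ω = P_L(t) and the rule that pixels with gray value ≤ t form the lane-line class both follow the text:

```python
import numpy as np

def improved_otsu_threshold(img):
    """Improved Otsu threshold: the lane-line class (gray <= t) variance
    term is weighted by w = P_L(t), so the score maximized is
    P_L(t)^2 * u_L(t)^2 + P_B(t) * u_B(t)^2 ."""
    hist = np.bincount(img.astype(np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()
    i = np.arange(256)
    best_t, best_score = 0, -np.inf
    for t in range(255):
        pl = p[: t + 1].sum()              # cumulative probability of class L
        pb = p[t + 1 :].sum()              # cumulative probability of class B
        if pl == 0 or pb == 0:
            continue
        ul = (i[: t + 1] * p[: t + 1]).sum() / pl
        ub = (i[t + 1 :] * p[t + 1 :]).sum() / pb
        score = pl * pl * ul ** 2 + pb * ub ** 2
        if score > best_score:
            best_t, best_score = t, score
    return best_t

def binarize(img, T):
    """Pixels <= T are set to 1 (lane-line class), the rest to 0, as in (7)."""
    return (img <= T).astype(np.uint8)
```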
(2) Lane line detection using an improved Hough transform algorithm. This step is performed in the near vision area of the region of interest; the intersection of the detected lane line with the upper boundary of the near vision area serves as the initial feature point for the scanning-iteration algorithm in the subsequent curve detection.
As shown in fig. 3, an improved Hough transform algorithm is used to detect the lane lines. In actual detection, the lane in frame t is close to the lane in frame (t+1). Therefore, given the polar radius ρ_t and polar angle θ_t detected in frame t, the search ranges in frame (t+1) are:

ρ_{t+1} ∈ [ρ_t − α, ρ_t + α]    (8)

θ_{t+1} ∈ [θ_t − ε, θ_t + ε]    (9)

where α and ε are both thresholds; in the present invention α = 15 and ε = 10, which improves detection efficiency.
The specific detection method comprises the following steps:
1) determining the search ranges of the polar diameter rho and the polar angle theta according to the lane line constraint area, and respectively establishing a discrete parameter space between the maximum value and the minimum value of the search ranges;
2) establishing an accumulator N (rho, theta) of a two-dimensional array, and assigning an initial value of 0 to each element in the array;
3) perform the Hough transform on each edge point of the preprocessed image (the pixels with gray value 1 after binarization), calculate the corresponding curve of the point in the (ρ, θ) coordinate system, and add 1 to the corresponding accumulator;
4) find the local maxima of the accumulator; each maximum corresponds to a set of collinear points in the (x, y) coordinate system and provides the straight-line parameters (ρ_0, θ_0). Substituting (ρ_0, θ_0) into ρ_0 = x·cosθ_0 + y·sinθ_0 gives the straight-line equation of the lane line.
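A sketch of steps 1)-4) with the frame-to-frame (ρ, θ) constraint; the discretization of one accumulator cell per unit of ρ and per degree of θ is an assumption made for illustration:

```python
import numpy as np

def hough_lines_constrained(binary, rho_prev, theta_prev, alpha=15, eps_deg=10):
    """Hough transform restricted to the (rho, theta) neighbourhood of the
    line found in the previous frame.

    binary: 2-D array of 0/1 edge pixels; theta in degrees.
    Returns (rho0, theta0) at the accumulator maximum.
    """
    thetas = np.deg2rad(np.arange(theta_prev - eps_deg, theta_prev + eps_deg + 1))
    rhos = np.arange(rho_prev - alpha, rho_prev + alpha + 1)
    acc = np.zeros((len(rhos), len(thetas)), dtype=int)   # 2-D accumulator N(rho, theta)
    ys, xs = np.nonzero(binary)
    for x, y in zip(xs, ys):
        # each edge point votes for every (rho, theta) cell whose line passes it
        for j, th in enumerate(thetas):
            rho = x * np.cos(th) + y * np.sin(th)
            i = int(round(rho - rhos[0]))
            if 0 <= i < len(rhos):
                acc[i, j] += 1
    i, j = np.unravel_index(acc.argmax(), acc.shape)
    return rhos[i], float(np.rad2deg(thetas[j]))
```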
(3) Curve fitting and curvature radius calculation: all calculations in this step are performed in the far vision area of the region of interest. First, curve feature points are selected with a scanning-iteration algorithm; then the curved lane line is filled in with a Catmull-Rom spline through the feature points; finally, the curvature radius of the curve is calculated based on the camera imaging rule.
The method for selecting the curve feature points based on scanning iteration comprises the following steps:
1) set the intersection of the straight lane line detected in step (2) with the upper boundary of the near vision area as the initial feature point P_{x,y};
2) starting from the initial feature point P_{x,y}, scan the image with a window of size 3 × 2;
3) set the pixel with the highest gray value in the window as a curve feature point, and take this point as the initial feature point for the next scan;
4) repeat 2) and 3) until the boundary of the far vision area is reached or point P coincides with the vanishing point; scanning is then complete.
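The scanning iteration can be sketched as follows; the exact placement of the 3 × 2 window is shown only as a figure in the patent, so a window 3 pixels wide and 2 pixels tall immediately above the current point is assumed here:

```python
import numpy as np

def scan_curve_points(gray, start_xy, far_field_top):
    """Scanning-iteration feature-point selection (illustrative sketch).

    gray: 2-D gray image; start_xy: (x, y) initial feature point at the
    near-vision upper boundary; far_field_top: row index of the far
    vision area's upper boundary.  Returns the selected feature points.
    """
    x, y = start_xy
    points = []
    h, w = gray.shape
    while y - 2 >= far_field_top:
        # assumed 3-wide, 2-tall window just above the current point
        window = gray[y - 2 : y, max(x - 1, 0) : min(x + 2, w)]
        dy, dx = np.unravel_index(window.argmax(), window.shape)
        x = max(x - 1, 0) + dx             # brightest pixel becomes the
        y = y - 2 + dy                     # next initial feature point
        points.append((x, y))
    return points
```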
Lane line filling based on CatMull-Rom spline curve:
after the scanning-iteration algorithm is complete, a number of isolated curve feature points are obtained; to improve the accuracy of the subsequent curve-curvature calculation, these feature points are fitted with a Catmull-Rom spline.
The Catmull-Rom spline is a piecewise, continuous, smooth curve: each segment is computed separately, and the curve is continuously differentiable from one segment to the next. For four consecutive feature points P_{t−1}, P_t, P_{t+1}, P_{t+2} and a parameter s between 0 and 1, the uniform Catmull-Rom segment equation is:

P(s) = ½ · [ 2·P_t + (P_{t+1} − P_{t−1})·s + (2·P_{t−1} − 5·P_t + 4·P_{t+1} − P_{t+2})·s² + (3·P_t − P_{t−1} − 3·P_{t+1} + P_{t+2})·s³ ]

where P_{t−1}, P_t, P_{t+1}, P_{t+2} are the coordinates of the four feature points. The spline segment obtained from this equation is the curve from point P_t to point P_{t+1}.
The actual fitting procedure is as follows:
1) denote all curve feature points from bottom to top as P_1, P_2, P_3, ... P_n, where n represents the number of curve feature points;
2) assign the initial value t = 2;
3) take the four feature points P_{t−1}, P_t, P_{t+1}, P_{t+2} into the Catmull-Rom spline equation to obtain the curve function;
4) increase t by 1;
5) repeat 3) and 4) until t equals n, then exit the loop; all curve feature points have been traversed and curve fitting is complete.
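A sketch of the Catmull-Rom evaluation and the fitting loop in steps 1)-5) (function names and the sampling density are illustrative):

```python
def catmull_rom_point(p0, p1, p2, p3, s):
    """Evaluate one uniform Catmull-Rom segment at s in [0, 1].
    Returns the point between p1 and p2; each p is an (x, y) tuple."""
    def axis(a0, a1, a2, a3):
        return 0.5 * (2 * a1
                      + (a2 - a0) * s
                      + (2 * a0 - 5 * a1 + 4 * a2 - a3) * s ** 2
                      + (3 * a1 - a0 - 3 * a2 + a3) * s ** 3)
    return (axis(p0[0], p1[0], p2[0], p3[0]),
            axis(p0[1], p1[1], p2[1], p3[1]))

def fit_lane(points, samples=10):
    """Fill the curved lane line through isolated feature points by
    sampling every Catmull-Rom segment (needs at least four points)."""
    curve = []
    for t in range(1, len(points) - 2):        # segment needs P_{t-1}..P_{t+2}
        for k in range(samples):
            curve.append(catmull_rom_point(points[t - 1], points[t],
                                           points[t + 1], points[t + 2],
                                           k / samples))
    curve.append(points[-2])                   # close at the last interior point
    return curve
```

Because the segment runs from P_t to P_{t+1}, the first and last feature points act only as tangent controls and are not themselves interpolated.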
Calculating the curvature radius of the curve:
Assume the road-surface gradient is small enough to be ignored, so the Y coordinate of every point on the road surface is equal. According to the camera imaging rule, the world coordinates (X, Y, Z) and the image coordinates (x, y) of any point P in the space satisfy the conversion relation:

X = x·H / y,    Z = f·H / y

where H is the vertical height of the camera optical center and f is the focal length of the camera.
And (3) arbitrarily taking four groups of points on the fitted curve lane line, and substituting the points into a formula after the coordinate change is completed, wherein each group of three points are:
Figure BDA0001653373610000062
wherein, (a, b) is the centre of a circle of the curve, and R is the corresponding curvature radius.
Suppose the curvature radii obtained from the four groups of points are R_1, R_2, R_3 and R_4. To prevent an abnormal value among the four radius values from introducing a large error into the subsequently determined curvature radius, abnormal values are detected and eliminated as follows:

Take the minimum R_min of the four radius values and calculate the mean of the remaining three values; if R_min deviates from this mean by more than the allowed error, the minimum is identified as an outlier and rejected. Similarly, take the maximum R_max of the four radius values and calculate the mean of the remaining three values; if R_max deviates from this mean by more than the allowed error, the maximum is identified as an outlier and rejected.

After the abnormal values among the four curvature radii are eliminated, the curvature radii within the error range are averaged to obtain the curvature radius R of the curve.
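A sketch of the outlier rejection and averaging; the patent's exact error bound is not reproduced in the text, so a relative tolerance `rel_tol` (deviation relative to the mean of the other values) is assumed:

```python
def mean(xs):
    return sum(xs) / len(xs)

def curve_radius(radii, rel_tol=0.3):
    """Average the four per-group curvature radii after rejecting an
    abnormal minimum and/or maximum, per the procedure in the text.
    rel_tol is an assumed tolerance, not from the patent."""
    kept = sorted(radii)
    rest = kept[1:]                        # the values other than the minimum
    if abs(kept[0] - mean(rest)) > rel_tol * mean(rest):
        kept = rest                        # minimum is an outlier: reject it
    rest = kept[:-1]                       # the values other than the maximum
    if abs(kept[-1] - mean(rest)) > rel_tol * mean(rest):
        kept = rest                        # maximum is an outlier: reject it
    return mean(kept)                      # curvature radius R of the curve
```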
(4) Headlamp angle adjustment model: and establishing a headlamp angle adjustment model and solving according to the parking sight distance, the curve illumination distance and the geometric relation of the curvature radius.
The parking sight distance, namely the shortest driving distance from the driver to the brake parking after finding the front obstacle, is calculated by the following formula:
Figure BDA00016533736100000610
wherein S is a parking sight distance, v is a running speed, t is a driver reaction time, mu is a friction coefficient between a road surface and a tire, S0Is a safe distance.
From the geometrical relationship between the curve illumination distance and the curve radius, the following can be known:
Figure BDA0001653373610000071
wherein the content of the first and second substances,
Figure BDA0001653373610000072
is the horizontal corner of the headlamp. The two formulas are combined to obtain:
Figure BDA0001653373610000073
According to the national regulations on headlamp adjustment angles, the curved elbow of the cut-off line must not intersect the vehicle center-of-gravity trajectory within a distance of 100 times the corresponding low-beam mounting height in front of the vehicle, i.e.:

sin φ_max = 100·H / (2·R)

where φ_max is the maximum adjustment angle of the headlamp and H is the reference center height of the headlamp.
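Step (4) can be put together as a sketch, under the assumptions that the stopping sight distance uses the common km/h form S = v·t/3.6 + v²/(254·μ) + S_0 and that the rotation angle follows the chord relation sin φ = S/(2R):

```python
import math

def headlamp_angle(v_kmh, t_react, mu, s0, radius_m, lamp_height_m):
    """Headlamp horizontal rotation angle (radians) from stopping sight
    distance and curve radius, capped by the 100x-mounting-height rule.
    The formulas are assumed reconstructions, not taken verbatim from
    the patent."""
    # stopping sight distance in metres, with v in km/h
    s = v_kmh * t_react / 3.6 + v_kmh ** 2 / (254 * mu) + s0
    phi = math.asin(min(s / (2 * radius_m), 1.0))
    # regulatory cap: aim no further than 100x the low-beam mounting height
    phi_max = math.asin(min(100 * lamp_height_m / (2 * radius_m), 1.0))
    return min(phi, phi_max)
```

For example, at 60 km/h on a 200 m radius curve the required angle stays below the regulatory cap, while at higher speeds on tight curves the cap binds.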

Claims (3)

1. A self-adaptive headlamp steering control algorithm based on lane line detection is characterized by comprising the following steps:
(1) image preprocessing, including region of interest division, linear gray processing and image binarization based on an improved Dajin algorithm, wherein the region of interest division method comprises the following steps: dividing the acquired image into three transverse areas from top to bottom: the system comprises a sky area, a far vision area and a near vision area, wherein the height of the sky area is one half of the height of an image, the heights of the far vision area and the near vision area are both one fourth of the height of the image, and the far vision area and the near vision area at the bottom of the image are set as regions of interest;
(2) the method for detecting the lane line by adopting the improved Hough transformation algorithm comprises the following steps:
1) determining the search ranges of the polar diameter rho and the polar angle theta according to the lane line constraint area, and respectively establishing a discrete parameter space between the maximum value and the minimum value of the search ranges;
2) establishing an accumulator N (rho, theta) of a two-dimensional array, and assigning an initial value of 0 to each element in the array;
3) perform the Hough transform on each edge point of the preprocessed image (the pixels with gray value 1 after binarization), calculate the corresponding curve of the point in the (ρ, θ) coordinate system, and add 1 to the corresponding accumulator;
4) find the local maxima of the accumulator; each maximum corresponds to a set of collinear points in the (x, y) coordinate system and provides the straight-line parameters (ρ_0, θ_0); substituting (ρ_0, θ_0) into ρ_0 = x·cosθ_0 + y·sinθ_0 gives the straight-line equation of the lane line;
(3) curve fitting and curvature radius calculation:
firstly, a method for selecting curve feature points based on scanning iteration comprises the following steps:
1) set the intersection of the straight lane line detected in step (2) with the upper boundary of the near vision area as the initial feature point P_{x,y};
2) starting from the initial feature point P_{x,y}, scan the image with a window of size 3 × 2;
3) set the pixel with the highest gray value in the window as a curve feature point, and take this point as the initial feature point for the next scan;
4) repeat step 2) and step 3) until the boundary of the far vision area is reached or point P coincides with the vanishing point; scanning is then complete;
secondly, lane line filling based on the CatMull-Rom spline curve:
after the scanning-iteration algorithm is complete, a number of isolated curve feature points are obtained, and the feature points are fitted with a Catmull-Rom spline; for four consecutive feature points P_{t−1}, P_t, P_{t+1}, P_{t+2} and a parameter s between 0 and 1, the Catmull-Rom spline segment equation is:

P(s) = ½ · [ 2·P_t + (P_{t+1} − P_{t−1})·s + (2·P_{t−1} − 5·P_t + 4·P_{t+1} − P_{t+2})·s² + (3·P_t − P_{t−1} − 3·P_{t+1} + P_{t+2})·s³ ]

where P_{t−1}, P_t, P_{t+1}, P_{t+2} are the coordinates of the four feature points, and the spline segment obtained from this equation is the curve from point P_t to point P_{t+1},
the curve fitting process is as follows:
1) denote all curve feature points from bottom to top as P_1, P_2, P_3, ... P_n, where n represents the number of curve feature points;
2) assign the initial value t = 2;
3) take the four feature points P_{t−1}, P_t, P_{t+1}, P_{t+2} into the Catmull-Rom spline equation to obtain the curve function;
4) increase t by 1;
5) repeat step 3) and step 4) until t equals n, then exit the loop; at this point all curve feature points have been traversed and curve fitting is complete;
finally, the curve curvature radius is calculated:
assuming the road-surface gradient is small, the Y coordinate of every point on the road surface is equal; according to the camera imaging rule, the world coordinates (X, Y, Z) and the image coordinates (x, y) of any point P in the space satisfy the conversion relation:

X = x·H / y,    Z = f·H / y

where H is the vertical height of the camera optical center and f is the focal length of the camera;
four groups of points are taken arbitrarily on the fitted curved lane line, and after the coordinate conversion is completed each group of three points is substituted into the circle equation:

(X − a)² + (Z − b)² = R²
where (a, b) is the center of the curve circle and R is the corresponding curvature radius;
suppose the curvature radii obtained from the four groups of points are R_1, R_2, R_3 and R_4; to prevent an abnormal value among the four radius values from introducing a large error into the subsequently determined curvature radius, abnormal values are detected and eliminated as follows: take the minimum R_min of the four radius values and calculate the mean of the remaining three values; if R_min deviates from this mean by more than the allowed error, the minimum is identified as an outlier and rejected; similarly, take the maximum R_max of the four radius values and calculate the mean of the remaining three values; if R_max deviates from this mean by more than the allowed error, the maximum is identified as an outlier and rejected;
after the abnormal values among the four curvature radii are eliminated, the curvature radii within the error range are averaged to obtain the curvature radius R of the curve;
(4) establish and solve the headlamp angle adjustment model according to the stopping sight distance and the geometric relation between the curve illumination distance and the curvature radius; the stopping sight distance is calculated as:

S = v·t / 3.6 + v² / (254·μ) + S_0

where S is the stopping sight distance, v is the running speed (in km/h), t is the driver reaction time, μ is the friction coefficient between the road surface and the tires, and S_0 is a safety distance;
from the geometric relation between the curve illumination distance and the curve radius:

sin φ = S / (2·R)

where φ is the horizontal rotation angle of the headlamp; combining the two formulas gives:

φ = arcsin( (v·t / 3.6 + v² / (254·μ) + S_0) / (2·R) )

the curved elbow of the cut-off line must not intersect the vehicle center-of-gravity trajectory within a distance of 100 times the corresponding low-beam mounting height in front of the vehicle, i.e.:

sin φ_max = 100·H / (2·R)

where φ_max is the maximum adjustment angle of the headlamp and H is the reference center height of the headlamp.
2. The adaptive headlamp steering control algorithm based on lane line detection as claimed in claim 1, wherein the lane line detection in step (2) is performed in a near field region in the region of interest.
3. The adaptive headlamp steering control algorithm based on lane line detection as claimed in claim 1, wherein the curve fitting and the curvature radius calculation in step (3) are both performed in a far field region in the region of interest.
CN201810430833.XA 2018-05-08 2018-05-08 Adaptive headlamp steering control algorithm based on lane line detection Active CN108572650B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810430833.XA CN108572650B (en) 2018-05-08 2018-05-08 Adaptive headlamp steering control algorithm based on lane line detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810430833.XA CN108572650B (en) 2018-05-08 2018-05-08 Adaptive headlamp steering control algorithm based on lane line detection

Publications (2)

Publication Number Publication Date
CN108572650A CN108572650A (en) 2018-09-25
CN108572650B (en) 2021-08-24

Family

ID=63571967

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810430833.XA Active CN108572650B (en) 2018-05-08 2018-05-08 Adaptive headlamp steering control algorithm based on lane line detection

Country Status (1)

Country Link
CN (1) CN108572650B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117830592A (en) * 2023-12-04 2024-04-05 广州成至智能机器科技有限公司 Unmanned aerial vehicle night illumination method, system, equipment and medium based on image

Citations (2)

Publication number Priority date Publication date Assignee Title
CN102490650A (en) * 2011-12-20 2012-06-13 奇瑞汽车股份有限公司 Steering lamp control device for vehicle, automobile and control method
CN107730520A (en) * 2017-09-22 2018-02-23 智车优行科技(北京)有限公司 Method for detecting lane lines and system

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN103192758B (en) * 2013-04-19 2015-02-11 北京航空航天大学 Front lamp following turning control method based on machine vision
DE102014204614A1 (en) * 2014-03-12 2015-09-17 Automotive Lighting Reutlingen Gmbh Method for providing a headlight for a motor vehicle, and a lighting device for a motor vehicle
US20150294566A1 (en) * 2014-04-15 2015-10-15 Tomorrow's Transportation Today Trip planning and management methods for an intelligent transit system with electronic guided buses

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN102490650A (en) * 2011-12-20 2012-06-13 奇瑞汽车股份有限公司 Steering lamp control device for vehicle, automobile and control method
CN107730520A (en) * 2017-09-22 2018-02-23 智车优行科技(北京)有限公司 Method for detecting lane lines and system

Also Published As

Publication number Publication date
CN108572650A (en) 2018-09-25

Similar Documents

Publication Publication Date Title
CN108960183B (en) Curve target identification system and method based on multi-sensor fusion
CN105206109B (en) A kind of vehicle greasy weather identification early warning system and method based on infrared CCD
EP3304886B1 (en) In-vehicle camera system and image processing apparatus
JP5809785B2 (en) Vehicle external recognition device and light distribution control system using the same
US9311711B2 (en) Image processing apparatus and image processing method
US9297641B2 (en) Detection of obstacles at night by analysis of shadows
JP5145585B2 (en) Target detection device
CN109190523B (en) Vehicle detection tracking early warning method based on vision
US20130141520A1 (en) Lane tracking system
US20150186733A1 (en) Three-dimensional object detection device, three-dimensional object detection method
CN103020948A (en) Night image characteristic extraction method in intelligent vehicle-mounted anti-collision pre-warning system
JP6723328B2 (en) Vehicle detection method, night-time vehicle detection method and system based on dynamic light intensity
CN102815259B (en) Regulation method for head lamps, device thereof and driver assistance system
US20040061712A1 (en) Stereoscopic image processing apparatus and the method of processing stereoscopic images
JP2861754B2 (en) Light distribution control device for headlamp
CN102865824B (en) A kind of method and apparatus calculating relative distance between vehicle
CN112406687A (en) 'man-vehicle-road' cooperative programmable matrix headlamp system and method
CN104220301A (en) Method and control device for adapting upper boundary of headlight beam
CN107622494B (en) Night vehicle detection and tracking method facing traffic video
CN109229011A (en) A kind of headlight steering control system and method based on lane detection
CN108572650B (en) Adaptive headlamp steering control algorithm based on lane line detection
Mori et al. Recognition of foggy conditions by in-vehicle camera and millimeter wave radar
JP4969359B2 (en) Moving object recognition device
JPH06270733A (en) Head lamp device for vehicle
CN111414857A (en) Front vehicle detection method based on vision multi-feature fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 210044 No. 219 Ningliu Road, Jiangbei New District, Nanjing City, Jiangsu Province

Applicant after: Nanjing University of Information Science and Technology

Address before: 211500 Yuting Square, 59 Wangqiao Road, Liuhe District, Nanjing City, Jiangsu Province

Applicant before: Nanjing University of Information Science and Technology

GR01 Patent grant