CN108572650B - Adaptive headlamp steering control algorithm based on lane line detection - Google Patents
Adaptive headlamp steering control algorithm based on lane line detection
- Publication number
- CN108572650B CN108572650B CN201810430833.XA CN201810430833A CN108572650B CN 108572650 B CN108572650 B CN 108572650B CN 201810430833 A CN201810430833 A CN 201810430833A CN 108572650 B CN108572650 B CN 108572650B
- Authority
- CN
- China
- Prior art keywords
- curve
- point
- lane line
- image
- points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
- G05D1/0253—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0214—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0221—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0223—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
Abstract
The invention discloses an adaptive headlamp steering control algorithm based on lane line detection, comprising the following steps: (1) image preprocessing: determining a region of interest from the structural characteristics of the picture and dividing it, enhancing the contrast of the different image regions by a linear gray-scale transformation, and then binarizing the image with an improved Otsu algorithm; (2) detecting the lane line with an improved Hough transform algorithm; (3) curve fitting and curvature radius calculation; (4) establishing and solving a headlamp angle adjustment model from the geometric relation among the parking sight distance, the curve illumination distance, and the curvature radius. By rotating the headlamp, the invention eliminates the visual blind area on the inner side of a curve when driving at night and ensures the safety of vehicles driving through curves at night.
Description
Technical Field
The invention relates to adaptive headlamp steering control algorithms, and in particular to an adaptive headlamp steering control algorithm based on lane line detection.
Background
With the rapid growth in the number of vehicles, the number of deaths and injuries caused by car accidents keeps rising, and reducing the frequency of traffic accidents has become a problem that urgently needs to be solved in China. It is reported that 82% of car accidents occur under poor night-time lighting conditions; meanwhile, major accidents at night are about 1.5 times as frequent as in the daytime, and 60% of accidents occur on poorly illuminated curves. When a car drives through a curve at night, a visual "blind area" often appears on the inner side of the curve because the direction of the headlamp optical axis cannot be adjusted. This blind area arises because the headlamp illumination area is fixed while the car turns, and the driver's line of sight remains confined by inertia to the straight-ahead range lit by the beam, which brings serious hidden dangers to traffic safety.
Disclosure of Invention
Purpose of the invention: the invention aims to overcome the defects of conventional passive headlamp steering control systems and to provide a highly reliable adaptive headlamp steering control algorithm based on lane line detection.
The technical scheme is as follows: the invention comprises the following steps:
(1) image preprocessing, including region-of-interest division, linear gray-scale processing, and image binarization based on an improved Otsu algorithm;
(2) detecting the lane line by adopting an improved Hough transformation algorithm;
(3) curve fitting and curvature radius calculation;
(4) and establishing a headlamp angle adjustment model and solving according to the parking sight distance, the curve illumination distance and the geometric relation of the curvature radius.
The region-of-interest division method in step (1) is as follows: the acquired image is divided into three transverse areas from top to bottom: a sky area, a far vision field area and a near vision field area, where the sky area height is half of the image height and the far and near vision field area heights are each a quarter of the image height; the far and near vision field areas at the bottom of the image are set as the region of interest.
And (3) the lane line detection in the step (2) is carried out in a near vision region in the region of interest.
And (4) performing curve fitting and curvature radius calculation in the step (3) in a far field area in the region of interest.
The calculation process of step (3) is as follows: firstly, curve feature points are selected with a scanning iterative algorithm; then the curved lane lines are filled in with a Catmull-Rom spline according to the feature points; finally, the curvature radius of the curve is calculated from the imaging rule of the camera.
Has the advantages that: the invention eliminates the visual blind area on the inner side of the curve when driving at night by rotating the headlamp, and ensures the safety of the vehicle driving at the curve at night.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic view of region of interest partitioning according to the present invention;
fig. 3 is a schematic diagram of the constrained range of the Hough transform search region of the present invention.
Detailed Description
The invention will be further explained with reference to the drawings.
As shown in fig. 1, the present invention comprises the steps of:
(1) image preprocessing: the computational load of the subsequent algorithms is reduced by determining and dividing the region of interest; the contrast of each part of the picture is enhanced by linear gray-scale processing, improving the accuracy of the subsequent detection algorithm; and the binarization of the image is completed with an improved Otsu algorithm.
Determination of the region of interest:
as shown in fig. 2, the acquired image is divided into three transverse regions from top to bottom: a sky area, a far vision field area and a near vision field area, where the sky area height is half of the image height and the far and near vision field area heights are each a quarter of the image height. The far and near vision field areas at the bottom of the image are set as the region of interest. All following steps are performed in the region of interest.
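The three-band division above can be sketched as follows (a minimal NumPy illustration; the function name `split_regions` and the 480 × 640 frame size are assumptions, not from the original text):

```python
import numpy as np

def split_regions(image):
    """Split a road image into the three transverse bands described above:
    top half = sky area, next quarter = far vision field,
    bottom quarter = near vision field; the lower two bands form the ROI."""
    h = image.shape[0]
    sky = image[: h // 2]
    far = image[h // 2 : 3 * h // 4]
    near = image[3 * h // 4 :]
    roi = image[h // 2 :]  # far + near vision fields: the region of interest
    return sky, far, near, roi

frame = np.zeros((480, 640), dtype=np.uint8)  # hypothetical camera frame
sky, far, near, roi = split_regions(frame)
```

Restricting every later step to `roi` is what cuts the computational load mentioned above.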
Linear gray scale transformation:
the purpose of the linear gray-scale transformation is to make the bright parts of the image brighter and the dark parts darker, increasing the contrast so that the gray-level difference between the lane-line class and the background class grows, which reduces the loss of lane-line features during the subsequent binarization with the Otsu method.
Let the original image be I(x, y), with the gray values of its pixels lying in the interval [Imin, Imax], and let the desired output gray range be [I′min, I′max] (where I′min < Imin and I′max > Imax); then:

I′(x, y) = (I′max − I′min)/(Imax − Imin) · (I(x, y) − Imin) + I′min (1)

where I′(x, y) is the image after the linear gray-scale transformation.
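A compact sketch of this linear stretch (NumPy; the 0 and 255 defaults for the output range are an assumption chosen to illustrate the contrast expansion):

```python
import numpy as np

def linear_stretch(img, out_min=0, out_max=255):
    """Map gray values linearly from [img.min(), img.max()] to
    [out_min, out_max], brightening bright pixels and darkening dark ones."""
    i_min, i_max = int(img.min()), int(img.max())
    if i_max == i_min:                        # flat image: nothing to stretch
        return np.full_like(img, out_min)
    scale = (out_max - out_min) / (i_max - i_min)
    out = (img.astype(np.float64) - i_min) * scale + out_min
    return np.clip(out, 0, 255).astype(np.uint8)

stretched = linear_stretch(np.array([[50, 100], [150, 200]], dtype=np.uint8))
```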
Image binarization based on an improved Otsu algorithm:
let n be the total number of pixels in the image and n_i the number of pixels with gray value i; then the probability of a pixel with gray value i appearing is:

p_i = n_i / n (2)
let the cumulative probabilities P(t) and mean gray values u(t) of the lane-line class L and the background class B be:

P_L(t) = Σ_{i=0..t} p_i, u_L(t) = Σ_{i=0..t} i·p_i / P_L(t)
P_B(t) = Σ_{i=t+1..L−1} p_i, u_B(t) = Σ_{i=t+1..L−1} i·p_i / P_B(t) (3)

where t is the threshold, i.e. pixels with gray value less than or equal to t are classified into the lane-line class and pixels with gray value greater than t into the background class, and L is the total number of distinct gray levels.
Therefore, the between-class variance of the two classes, lane-line class and background class, is:

δ(t) = ω·P_L(t)·(u_L(t))² + P_B(t)·(u_B(t))² (4)
unlike the traditional Otsu method, and in view of the fact that the area of the lane-line class is much smaller than that of the background class, the background variance is given a greater weight in the between-class variance above so as to be closer to the actual situation, and the target variance (i.e. the lane-line class variance) is multiplied by a coefficient ω between 0 and 1. To make the method more adaptive, let

ω = P_L(t) (5)
The threshold T in the algorithm is:
T = arg max δ(t) = arg max( P_L(t)·P_L(t)·(u_L(t))² + P_B(t)·(u_B(t))² ) (6)
the binarized image I″ of the improved Otsu algorithm can then be represented as:

I″(x, y) = 1 if I′(x, y) ≤ T, and 0 otherwise. (7)
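The weighted criterion of Eqs. (4)-(6) can be implemented by an exhaustive search over t (a sketch assuming 8-bit images, i.e. L = 256 gray levels; the function name is illustrative):

```python
import numpy as np

def improved_otsu_threshold(img):
    """Return the threshold T maximizing
    delta(t) = P_L(t) * P_L(t) * u_L(t)**2 + P_B(t) * u_B(t)**2,
    i.e. the classic between-class terms with the lane-line term
    down-weighted by omega = P_L(t), as in Eqs. (4)-(6)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()                     # p_i = n_i / n
    levels = np.arange(256)
    best_t, best_delta = 0, -1.0
    for t in range(256):
        pl = p[: t + 1].sum()                 # P_L(t)
        pb = 1.0 - pl                         # P_B(t)
        if pl == 0.0 or pb == 0.0:            # one class empty: skip t
            continue
        ul = (levels[: t + 1] * p[: t + 1]).sum() / pl   # u_L(t)
        ub = (levels[t + 1 :] * p[t + 1 :]).sum() / pb   # u_B(t)
        delta = pl * pl * ul**2 + pb * ub**2
        if delta > best_delta:
            best_delta, best_t = delta, t
    return best_t
```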
(2) detecting the lane line with an improved Hough transform algorithm. This step is carried out in the near vision field area of the region of interest, and the intersection point of the identified lane line with the upper boundary of the near vision field area is the initial feature point of the subsequent curve-detection scanning iterative algorithm.
As shown in fig. 3, an improved Hough transform algorithm is adopted to detect lane lines. In actual detection, the lane position in frame t is close to that in frame t+1. Therefore, given the polar radius ρ_t and polar angle θ_t detected in frame t, the search range in frame t+1 is restricted to ρ ∈ [ρ_t − α, ρ_t + α] and θ ∈ [θ_t − ε, θ_t + ε], where α and ε are both thresholds. In the present invention α = 15 and ε = 10, which improves detection efficiency.
The specific detection method comprises the following steps:
1) determining the search ranges of the polar diameter rho and the polar angle theta according to the lane line constraint area, and respectively establishing a discrete parameter space between the maximum value and the minimum value of the search ranges;
2) establishing an accumulator N (rho, theta) of a two-dimensional array, and assigning an initial value of 0 to each element in the array;
3) performing the Hough transform on each edge point of the preprocessed image (a pixel with gray value 1 after binarization), calculating the corresponding curve of the point in the (ρ, θ) coordinate system, and incrementing the corresponding accumulator by 1;
4) finding the local maximum of the accumulator, which corresponds to collinear points in the (x, y) coordinate system and provides the parameters (ρ0, θ0) of the straight line through those collinear points in the (x, y) coordinate plane; substituting (ρ0, θ0) into ρ0 = x·cos θ0 + y·sin θ0 yields the linear equation of the lane line.
(3) Curve fitting and curvature radius calculation: all calculations in this step are performed in the far vision field area of the region of interest. Firstly, curve feature points are selected with a scanning iterative algorithm; then the curved lane lines are filled in with a Catmull-Rom spline according to the feature points; finally, the curvature radius of the curve is calculated from the imaging rule of the camera.
The method for selecting the curve feature points based on scanning iteration comprises the following steps:
1) setting the intersection point of the straight lane line detected in step (2) and the upper boundary of the near vision field area as the initial feature point P(x, y);
2) starting from the initial feature point P(x, y), the image is scanned with a window of size 3 × 2, whose coordinates are
3) The pixel point with the highest gray value in the window is set as a curve feature point, and the point is used as an initial feature point of the next scanning, namely:
4) repeating 2) and 3) until the boundary of the far vision field area is reached or point P coincides with the vanishing point, at which point the scanning is finished.
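Steps 1)-4) of the scanning iteration can be sketched as below. The exact window-coordinate formula was lost from the source, so the placement here, a 3-wide × 2-tall patch directly above the current point, is an interpretation:

```python
import numpy as np

def scan_curve_points(gray, start, far_top=0):
    """Climb from the initial feature point: examine a 3-wide x 2-tall
    window above the current point, move to its brightest pixel, and
    repeat until the top of the far vision field (far_top) is reached."""
    x, y = start
    h, w = gray.shape
    points = [(x, y)]
    while y - 1 >= far_top:
        y0, x0 = max(y - 2, 0), max(x - 1, 0)
        win = gray[y0:y, x0 : min(x + 2, w)]
        if win.size == 0 or win.max() == 0:   # no lane pixel ahead: stop
            break
        dy, dx = np.unravel_index(win.argmax(), win.shape)
        y, x = int(y0 + dy), int(x0 + dx)     # brightest pixel = next point
        points.append((x, y))
    return points
```

Because the window lies strictly above the current row, y decreases every step, so the loop always terminates at the upper boundary.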
Lane line filling based on CatMull-Rom spline curve:
after the scanning iterative algorithm is completed, a number of isolated curve feature points are obtained; to improve the accuracy of the subsequent curve-curvature calculation, curve fitting is performed on these feature points with a Catmull-Rom spline.
The Catmull-Rom spline is a piecewise, continuous, smooth curve; each segment is computed separately, and the curve is continuously differentiable from one segment to the next. The Catmull-Rom spline equation is:

P(s) = (1/2)·[ 2P_t + (−P_{t−1} + P_{t+1})·s + (2P_{t−1} − 5P_t + 4P_{t+1} − P_{t+2})·s² + (−P_{t−1} + 3P_t − 3P_{t+1} + P_{t+2})·s³ ]

where s is a parameter in [0, 1] and P_{t−1}, P_t, P_{t+1}, P_{t+2} are the coordinates of four consecutive feature points. The spline obtained from this formula is the curve segment from point P_t to point P_{t+1}.
The actual fitting procedure is as follows:
1) let all curve feature points, from bottom to top, be P_1, P_2, P_3, …, P_n, where n is the number of curve feature points;
2) assigning an initial value of t to be 2;
3) substituting the four feature points P_{t−1}, P_t, P_{t+1}, P_{t+2} into the Catmull-Rom spline equation to obtain the curve function for this segment;
4) t is increased by 1;
5) repeating 3) and 4) until t equals n, then exiting the loop; all curve feature points have now been traversed and the curve fitting is complete.
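The segment formula and the traversal in steps 1)-5) can be sketched together. The parameter is named s here to avoid clashing with the feature-point index t, and `fit_lane` with its sample count is an illustrative assumption:

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, s):
    """Standard Catmull-Rom point at parameter s in [0, 1] on the segment
    between control points p1 and p2 (p0 and p3 are their neighbors)."""
    p0, p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p0, p1, p2, p3))
    return 0.5 * (
        2 * p1
        + (-p0 + p2) * s
        + (2 * p0 - 5 * p1 + 4 * p2 - p3) * s**2
        + (-p0 + 3 * p1 - 3 * p2 + p3) * s**3
    )

def fit_lane(points, samples=10):
    """Traverse consecutive quadruples P_{t-1}..P_{t+2}, as in the steps
    above, sampling each segment from P_t to P_{t+1}."""
    curve = []
    for t in range(1, len(points) - 2):
        for s in np.linspace(0.0, 1.0, samples, endpoint=False):
            curve.append(catmull_rom(points[t - 1], points[t],
                                     points[t + 1], points[t + 2], s))
    return np.array(curve)
```

By construction each segment interpolates its two middle control points: `catmull_rom(p0, p1, p2, p3, 0)` returns p1 and s = 1 returns p2, which is why the chained segments pass through every detected feature point.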
Calculating the curvature radius of the curve:
assuming the gradient of the road surface is small enough to be neglected, the Y coordinate of every point on the road surface is equal. According to the imaging rule of the camera, the world coordinates (X, Y, Z) and image coordinates (x, y) of any point P in space satisfy the following conversion relation:
wherein H is the vertical height of the optical center of the camera, and f is the focal length of the camera.
Four groups of points are taken arbitrarily on the fitted curved lane line and, after the coordinate conversion is completed, substituted into the circle equation, each group containing three points:

(x − a)² + (y − b)² = R²

where (a, b) is the center of the curve and R is the corresponding curvature radius.
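Each group of three points determines its circle through a pair of perpendicular-bisector equations; a sketch (the function name is an assumption):

```python
import numpy as np

def circle_through(p1, p2, p3):
    """Center (a, b) and radius R of the circle through three points,
    obtained by linearizing the circle equations
    (x_i - a)^2 + (y_i - b)^2 = R^2 and solving the 2x2 system."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    A = np.array([[x2 - x1, y2 - y1],
                  [x3 - x2, y3 - y2]], dtype=float)
    d = 0.5 * np.array([x2**2 - x1**2 + y2**2 - y1**2,
                        x3**2 - x2**2 + y3**2 - y2**2])
    a, b = np.linalg.solve(A, d)              # circle center
    return (a, b), float(np.hypot(x1 - a, y1 - b))
```

Subtracting the circle equations pairwise cancels the quadratic terms, leaving two linear equations in (a, b); the three points must not be collinear, or the system is singular.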
Assume the curvature radii obtained from the four groups of points are R_1, R_2, R_3 and R_4. To prevent an abnormal value among the four radii from introducing a large error into the subsequent curvature radius, outliers are detected and eliminated as follows: take the minimum R_min of the four radius values and compute the mean of the remaining three; if R_min deviates from that mean by more than the allowed error, the minimum is identified as an outlier and rejected. Similarly, take the maximum R_max of the four radius values and compute the mean of the remaining three; if R_max deviates from that mean by more than the allowed error, the maximum is identified as an outlier and rejected.
After eliminating abnormal values among the four curvature radii, the curvature radii within the error range are averaged to obtain the curvature radius of the curve.
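The reject-then-average rule can be sketched as below. The exact numeric criterion was lost from the source, so a relative-deviation test with a hypothetical tolerance `tol` stands in for the original condition:

```python
def mean_radius(radii, tol=0.5):
    """Reject R_min (resp. R_max) when it deviates from the mean of the
    other three radii by more than tol * that mean, then average what
    remains. tol is a hypothetical tolerance, not from the original text."""
    vals = sorted(radii)                      # four radii R_1..R_4
    keep = list(vals)
    low_rest = vals[1:]                       # the three larger values
    if abs(vals[0] - sum(low_rest) / 3) > tol * (sum(low_rest) / 3):
        keep.remove(vals[0])                  # minimum is an outlier
    high_rest = vals[:3]                      # the three smaller values
    if vals[3] in keep and \
            abs(vals[3] - sum(high_rest) / 3) > tol * (sum(high_rest) / 3):
        keep.remove(vals[3])                  # maximum is an outlier
    return sum(keep) / len(keep)
```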
(4) Headlamp angle adjustment model: and establishing a headlamp angle adjustment model and solving according to the parking sight distance, the curve illumination distance and the geometric relation of the curvature radius.
The parking sight distance, i.e. the shortest driving distance within which the driver can brake the vehicle to a stop after spotting an obstacle ahead, is calculated by the following formula:

S = v·t + v² / (2μg) + S_0

where S is the parking sight distance, v the driving speed, t the driver's reaction time, μ the friction coefficient between the road surface and the tire, g the gravitational acceleration, and S_0 a safety distance.
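The parking sight distance as the sum of reaction distance, braking distance and safety margin can be sketched as follows (the standard SI form S = v·t + v²/(2μg) + S_0 is assumed here, since the original formula image is unavailable):

```python
G = 9.8  # gravitational acceleration, m/s^2

def stopping_sight_distance(v, t_react, mu, s0):
    """S = v*t + v^2 / (2*mu*g) + S0: reaction distance plus braking
    distance plus a safety margin, all in SI units (m, m/s, s)."""
    return v * t_react + v**2 / (2 * mu * G) + s0
```

At 20 m/s (72 km/h) with a 1 s reaction time and μ = 0.7, the braking term v²/(2μg) contributes roughly 29 m on top of the 20 m reaction distance.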
From the geometrical relationship between the curve illumination distance and the curve radius, the following can be known:
The angle appearing in the formula is the horizontal rotation angle of the headlamp. Combining the two formulas gives:
according to the national regulations on headlamp adjustment angles, the bend of the cut-off line must not intersect the trajectory of the vehicle's center of gravity within a distance, ahead of the vehicle, of 100 times the corresponding low-beam mounting height, i.e.:
Claims (3)
1. A self-adaptive headlamp steering control algorithm based on lane line detection is characterized by comprising the following steps:
(1) image preprocessing, including region-of-interest division, linear gray-scale processing and image binarization based on an improved Otsu algorithm, where the region-of-interest division method is as follows: the acquired image is divided into three transverse areas from top to bottom: a sky area, a far vision field area and a near vision field area, where the height of the sky area is one half of the image height and the heights of the far and near vision field areas are each one quarter of the image height; the far and near vision field areas at the bottom of the image are set as the region of interest;
(2) the method for detecting the lane line by adopting the improved Hough transformation algorithm comprises the following steps:
1) determining the search ranges of the polar diameter rho and the polar angle theta according to the lane line constraint area, and respectively establishing a discrete parameter space between the maximum value and the minimum value of the search ranges;
2) establishing an accumulator N (rho, theta) of a two-dimensional array, and assigning an initial value of 0 to each element in the array;
3) performing the Hough transform on each edge point of the preprocessed image (a pixel with gray value 1 after binarization), calculating the corresponding curve of the point in the (ρ, θ) coordinate system, and incrementing the corresponding accumulator by 1;
4) finding the local maximum of the accumulator, which corresponds to collinear points in the (x, y) coordinate system and provides the parameters (ρ0, θ0) of the straight line through those collinear points in the (x, y) coordinate plane; substituting (ρ0, θ0) into ρ0 = x·cos θ0 + y·sin θ0 to obtain the linear equation of the lane line;
(3) curve fitting and curvature radius calculation:
firstly, a method for selecting curve feature points based on scanning iteration comprises the following steps:
1) determining the intersection point of the straight lane line detected in step (2) and the upper boundary of the near vision field area as the initial feature point P(x, y);
2) starting from the initial feature point P(x, y), the image is scanned with a window of size 3 × 2, whose coordinates are
3) The pixel point with the highest gray value in the window is set as a curve feature point, and the point is used as an initial feature point of the next scanning, namely:
4) repeating step 2) and step 3) until the boundary of the far vision field area is reached or point P coincides with the vanishing point, finishing the scanning;
secondly, lane line filling based on the CatMull-Rom spline curve:
after the scanning iterative algorithm is completed, a number of isolated curve feature points are obtained, and curve fitting is performed on these feature points with a Catmull-Rom spline, the Catmull-Rom spline equation being:

P(s) = (1/2)·[ 2P_t + (−P_{t−1} + P_{t+1})·s + (2P_{t−1} − 5P_t + 4P_{t+1} − P_{t+2})·s² + (−P_{t−1} + 3P_t − 3P_{t+1} + P_{t+2})·s³ ]

where s is a parameter in [0, 1] and P_{t−1}, P_t, P_{t+1}, P_{t+2} are the coordinates of four consecutive feature points; the spline obtained from this formula is the curve segment from point P_t to point P_{t+1},
the curve fitting process is as follows:
1) let all curve feature points, from bottom to top, be P_1, P_2, P_3, …, P_n, where n is the number of curve feature points;
2) assigning an initial value of t to be 2;
3) substituting the four feature points P_{t−1}, P_t, P_{t+1}, P_{t+2} into the Catmull-Rom spline equation to obtain the curve function for this segment;
4) t is increased by 1;
5) repeating the steps 3) and 4) until t is equal to n, exiting the cycle, traversing all curve feature points at the moment, and completing curve fitting;
finally, the curve curvature radius is calculated:
assuming the gradient of the road surface is small, the Y coordinate of every point on the road surface is equal; according to the imaging rule of the camera, the world coordinates (X, Y, Z) and image coordinates (x, y) of any point P in space satisfy the following conversion relation:
wherein H is the vertical height of the optical center of the camera, f is the focal length of the camera,
four groups of points are taken arbitrarily on the fitted curved lane line and, after the coordinate conversion is completed, substituted into the circle equation, each group containing three points:

(x − a)² + (y − b)² = R²

where (a, b) is the center of the curve and R is the corresponding curvature radius,
assume the curvature radii obtained from the four groups of points are R_1, R_2, R_3 and R_4; to prevent an abnormal value among the four radii from introducing a large error into the subsequent curvature radius, outliers are detected and eliminated as follows: take the minimum R_min of the four radius values and compute the mean of the remaining three; if R_min deviates from that mean by more than the allowed error, the minimum is identified as an outlier and rejected; similarly, take the maximum R_max of the four radius values and compute the mean of the remaining three; if R_max deviates from that mean by more than the allowed error, the maximum is identified as an outlier and rejected; after eliminating abnormal values among the four curvature radii, the curvature radii within the error range are averaged to obtain the curvature radius of the curve;
(4) establishing and solving a headlamp angle adjustment model according to the geometric relation among the parking sight distance, the curve illumination distance and the curvature radius, the parking sight distance being calculated by the following formula:

S = v·t + v² / (2μg) + S_0

where S is the parking sight distance, v the driving speed, t the driver's reaction time, μ the friction coefficient between the road surface and the tire, g the gravitational acceleration, and S_0 a safety distance,
from the geometrical relationship between the curve illumination distance and the curve radius, the following can be known:
where the angle appearing in the formula is the horizontal rotation angle of the headlamp; combining the two formulas gives:
the bend of the cut-off line must not intersect the trajectory of the vehicle's center of gravity within a distance, ahead of the vehicle, of 100 times the corresponding low-beam mounting height, i.e.:
2. The adaptive headlamp steering control algorithm based on lane line detection as claimed in claim 1, wherein the lane line detection in step (2) is performed in a near field region in the region of interest.
3. The adaptive headlamp steering control algorithm based on lane line detection as claimed in claim 1, wherein the curve fitting and the curvature radius calculation in step (3) are both performed in a far field region in the region of interest.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810430833.XA CN108572650B (en) | 2018-05-08 | 2018-05-08 | Adaptive headlamp steering control algorithm based on lane line detection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108572650A CN108572650A (en) | 2018-09-25 |
CN108572650B true CN108572650B (en) | 2021-08-24 |
Family
ID=63571967
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810430833.XA Active CN108572650B (en) | 2018-05-08 | 2018-05-08 | Adaptive headlamp steering control algorithm based on lane line detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108572650B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117830592A (en) * | 2023-12-04 | 2024-04-05 | 广州成至智能机器科技有限公司 | Unmanned aerial vehicle night illumination method, system, equipment and medium based on image |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102490650A (en) * | 2011-12-20 | 2012-06-13 | 奇瑞汽车股份有限公司 | Steering lamp control device for vehicle, automobile and control method |
CN107730520A (en) * | 2017-09-22 | 2018-02-23 | 智车优行科技(北京)有限公司 | Method for detecting lane lines and system |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103192758B (en) * | 2013-04-19 | 2015-02-11 | 北京航空航天大学 | Front lamp following turning control method based on machine vision |
DE102014204614A1 (en) * | 2014-03-12 | 2015-09-17 | Automotive Lighting Reutlingen Gmbh | Method for providing a headlight for a motor vehicle, and a lighting device for a motor vehicle |
US20150294566A1 (en) * | 2014-04-15 | 2015-10-15 | Tomorrow's Transportation Today | Trip planning and management methods for an intelligent transit system with electronic guided buses |
-
2018
- 2018-05-08 CN CN201810430833.XA patent/CN108572650B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN108572650A (en) | 2018-09-25 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| CB02 | Change of applicant information | Address after: 210044 No. 219 Ningliu Road, Jiangbei New District, Nanjing City, Jiangsu Province. Applicant after: Nanjing University of Information Science and Technology. Address before: 211500 Yuting Square, 59 Wangqiao Road, Liuhe District, Nanjing City, Jiangsu Province. Applicant before: Nanjing University of Information Science and Technology |
| GR01 | Patent grant | |