CN110084111A - A kind of quick vehicle detection at night method applied to adaptive high beam - Google Patents
A kind of quick vehicle detection at night method applied to adaptive high beam
- Publication number
- CN110084111A (application CN201910208692.1A)
- Authority
- CN
- China
- Prior art keywords
- grid
- bright
- pixel
- doubtful
- car light
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/584—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Traffic Control Systems (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a rapid nighttime vehicle detection method applied to an adaptive high beam. Step 1: an image acquisition module captures a traffic image of the road ahead of the vehicle and transfers the image data to an image processing module. Step 2: the image processing module processes the image data and uses a grid clustering algorithm to identify suspected vehicle lamp regions. Step 3: the halo range of each suspected lamp region is determined with an erosion algorithm, the halo color is computed with a fast algorithm, and headlights are distinguished from taillights. Step 4: the lamps are paired according to geometric relations, vehicles are identified and their coordinate positions are computed, completing nighttime vehicle detection. Step 5: a data transmission module transfers the vehicle coordinates computed by the image processing module to the high beam control module. The final lamp information produced by the image processing not only serves as the control basis of the adaptive high beam but can also support any other module that needs vehicle-lamp information.
Description
Technical field
The present invention relates to the field of digital image processing, and in particular to a rapid nighttime vehicle detection method applied to an adaptive high beam.
Background art
The high beam, as one of the important components of an automobile, mainly extends the field of view and provides sufficient brightness at night or under poorly lit conditions. However, for reasons such as poor driving habits or inexperienced drivers, the high and low beams are sometimes not switched in time when meeting another vehicle at night, dazzling the oncoming driver, who then cannot see the road and may very well cause an accident. An adaptive high beam system is therefore needed that automatically detects oncoming vehicles in the left lane and preceding vehicles in the current lane and automatically adjusts the high beam brightness in the corresponding regions, avoiding glare, improving driving safety and protecting drivers' lives.
In an adaptive high beam system, the key task is the detection of oncoming vehicles in the left lane and preceding vehicles in the current lane. Current nighttime vehicle detection methods are based either on detecting the vehicle lamps or on machine learning. Lamp-based methods exploit the fact that the lamps are the most distinctive feature of a vehicle at night; existing methods detect vehicles in night scenes from the brightness, shape and color of the lamps. These features are simple and easy to extract, but there are many interfering light sources, existing algorithms spend too much time handling them, and accuracy is low, so neither the real-time nor the accuracy requirement is met. In addition, most taillight detection work relies on color information, yet because taillights are very bright they actually appear white in the captured camera image, which greatly reduces detection accuracy.
Machine learning methods train an accurate model on a large number of samples so that vehicles can be detected correctly, but real vehicles are extremely varied and there is no single distinctive feature that describes them all; long detection times and low detection rates also remain unresolved problems.
Summary of the invention
According to national regulations, high beams and low beams are white and rear lamps are red. The present invention proposes a rapid nighttime vehicle detection system and method applied to an adaptive high beam, which detects vehicles mainly from their lamps and distinguishes headlights from taillights by the halo color, solving problems of existing image-processing-based lamp detection algorithms such as long processing time and low accuracy, which fail to meet real-time and accuracy requirements.
The present invention achieves the above technical objects by the following technical means.
A rapid nighttime vehicle detection system applied to an adaptive high beam comprises an image acquisition module, an image processing module and a data transmission module.
The image acquisition module captures a traffic image of the road ahead of the vehicle and transfers it to the image processing module; the image processing module receives the image acquired by the image acquisition module, performs the calculation with a dedicated built-in algorithm and obtains the coordinate positions of the other vehicles ahead; the data transmission module transfers the vehicle coordinates computed by the image processing module to the high beam control module.
A rapid nighttime vehicle detection method applied to an adaptive high beam comprises the following steps:
Step 1: the image acquisition module captures a traffic image of the road ahead of the vehicle and transfers the image data to the image processing module;
Step 2: the image processing module processes the image data and uses a grid clustering algorithm to identify suspected vehicle lamp regions;
Further, step 2 proceeds as follows:
Step 2.1: mirror the image img_src acquired by the image acquisition module to obtain the mirrored data img_cpy and store it in memory; pre-process img_cpy to extract the region relevant to the distribution of vehicle lamps, removing irrelevant areas such as the sky and the ground, and divide the relevant region into grids. Denote the set of all grids grid, i.e. grid = [grid_1, grid_2, ..., grid_m].
Step 2.2: compute the gray value G of every pixel of img_cpy inside the relevant region and compare it with a threshold T (T is determined experimentally for each image acquisition module); remove the background with G < T, keeping the high-brightness points and halo points, and count the bright pixels in each grid. If the number num of bright pixels exceeds a set value min_num (min_num depends on the grid size), the grid is judged to be a bright grid. Denote the set of all bright grids bright_grid, i.e. bright_grid = [bright_grid_1, bright_grid_2, ..., bright_grid_n].
Further, the gray value G of a pixel is computed as G = (R + (G << 1) + B) >> 2, where R, G and B are the values of the red, green and blue channels of the pixel.
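By way of illustration, a minimal Python sketch of steps 2.1 and 2.2 is given below; the helper name bright_grids, the BGR pixel layout and the values of grid_size, T and min_num are assumptions chosen for the example, not values fixed by the patent.

```python
import numpy as np

def bright_grids(img_bgr, grid_size=16, T=200, min_num=30):
    """Return the set of bright grid indices (row, col) of an image.

    A pixel is 'bright' when its gray value G = (R + 2*G + B) / 4 reaches the
    threshold T; a grid is 'bright' when it contains more than min_num bright
    pixels. grid_size, T and min_num are example values only.
    """
    b = img_bgr[..., 0].astype(np.uint16)
    g = img_bgr[..., 1].astype(np.uint16)
    r = img_bgr[..., 2].astype(np.uint16)
    gray = (r + (g << 1) + b) >> 2          # G = (R + 2G + B) / 4
    bright = gray >= T                      # remove background with G < T

    h, w = bright.shape
    result = set()
    for gy in range(0, h, grid_size):
        for gx in range(0, w, grid_size):
            num = bright[gy:gy + grid_size, gx:gx + grid_size].sum()
            if num > min_num:
                result.add((gy // grid_size, gx // grid_size))
    return result
```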
Step 2.3: apply the grid clustering algorithm to cluster all the bright grids and find all light spots in the relevant region. Denote the set of all clusters cluster, i.e. cluster = [cluster_1, cluster_2, ..., cluster_j].
Further, the grid clustering algorithm proceeds as follows:
Step 2.3.1: arbitrarily choose an unprocessed bright grid bright_grid_i from the set bright_grid as the boundary grid and mark it as processed;
Step 2.3.2: examine every grid in the ε-neighborhood of the boundary grid and judge whether it is a bright grid;
Step 2.3.3: if so, execute step 2.3.4; if not, the current cluster is finished and step 2.3.5 is executed;
Step 2.3.4: merge this grid with the boundary grid into one cluster, mark this bright grid as processed, take this grid as the new boundary grid and go to step 2.3.2;
Step 2.3.5: judge whether all bright grids in the set bright_grid have been processed; if so, end the clustering; if not, execute step 2.3.1.
Further, the ε-neighborhood satisfies the expression N_ε(bright_grid_i) = { grid_x | grid_x ∈ grid : d(grid_x, bright_grid_i) ≤ ε }, where grid is the set of all grids and d(grid_x, bright_grid_i) is the distance between the boundary grid bright_grid_i and an arbitrary grid grid_x.
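A minimal sketch of the grid clustering of step 2.3 follows, assuming the Chebyshev distance between grid indices as d and ε = 1 (so 8-connected bright grids merge); both choices are illustrative, not fixed by the patent.

```python
from collections import deque

def cluster_bright_grids(bright, eps=1):
    """Group bright grid indices into clusters of mutually eps-adjacent grids.

    bright: set of (row, col) indices of bright grids (e.g. from bright_grids()).
    The distance d is taken here as the Chebyshev distance between grid indices.
    Returns a list of clusters, each a list of grid indices.
    """
    unprocessed = set(bright)
    clusters = []
    while unprocessed:
        seed = unprocessed.pop()                 # step 2.3.1: pick an unprocessed bright grid
        current = [seed]
        frontier = deque([seed])
        while frontier:                          # steps 2.3.2 to 2.3.4: grow the cluster
            by, bx = frontier.popleft()
            for ny in range(by - eps, by + eps + 1):
                for nx in range(bx - eps, bx + eps + 1):
                    if (ny, nx) in unprocessed:  # bright grid inside the eps-neighborhood
                        unprocessed.remove((ny, nx))
                        current.append((ny, nx))
                        frontier.append((ny, nx))
        clusters.append(current)                 # step 2.3.5 handled by the outer loop
    return clusters
```

Because each bright grid is visited once, the cost grows with the number of bright grids rather than with the image resolution, which is the point of clustering grids instead of pixels.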
Step 2.4: compute the area of each cluster; if the area S satisfies S1 < S < S2 (S1 and S2 are determined by the camera resolution and focal length), the light spot is judged to be a suspected vehicle lamp, and light spots whose area falls outside this range are judged to be interfering light sources. Denote the set of all suspected vehicle lamps sus_light, i.e. sus_light = [sus_light_1, sus_light_2, ..., sus_light_k].
Step 3: determine the halo range of each suspected lamp region with an erosion algorithm, compute the halo color with a fast algorithm, and distinguish headlights from taillights;
Further, the halo range and color of a suspected lamp region are determined as follows:
Step 3.1: take the unprocessed suspected lamps sus_light_i from the set sus_light in order, mark each as processed, and apply the erosion algorithm to sus_light_i with the erosion count times set to 3;
Step 3.2: convert the edge pixels of the eroded image to HSI data with the fast algorithm and judge their color; denote the number of white pixels white_num and the total number of edge pixels pixel_num; if white_num > 2/3 * pixel_num, execute step 3.3, otherwise execute step 3.4;
Step 3.3: the eroded range is the halo range; convert the data of the corresponding lamp halo range in the original image img_src to HSI data with the fast algorithm; denote the number of red pixels red_num and the total number of pixels of the halo range pixel_sum; if red_num > 2/3 * pixel_sum, the lamp is judged to be a taillight, otherwise a headlight; execute step 3.5;
Step 3.4: continue eroding the image and increase the erosion count (times = times + 1); if times > 10, execute step 3.3, otherwise execute step 3.2;
Step 3.5: judge whether all suspected lamps in the set sus_light have been processed; if so, end the erosion; if not, execute step 3.1.
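The sketch below illustrates the erosion-and-halo decision loop of steps 3.1 to 3.5 using OpenCV's erode; the 3x3 structuring element, the initial spot mask and the is_white/is_red predicates passed in by the caller are assumptions made for illustration.

```python
import cv2
import numpy as np

def classify_lamp(patch_bgr, is_white, is_red, max_times=10):
    """Classify one suspected lamp patch as 'tail' or 'head' lamp.

    patch_bgr: BGR image patch of the suspected lamp (halo included).
    is_white / is_red: functions mapping an Nx3 BGR pixel array to a boolean mask.
    Returns (lamp_type, eroded_mask used as the halo range).
    """
    kernel = np.ones((3, 3), np.uint8)                  # assumed structuring element
    gray = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2GRAY)
    mask = (gray > 50).astype(np.uint8)                 # assumed spot-plus-halo mask
    times = 3
    eroded = cv2.erode(mask, kernel, iterations=times)  # step 3.1: erode 3 times

    while True:
        edge = eroded - cv2.erode(eroded, kernel)       # edge pixels of the eroded region
        edge_px = patch_bgr[edge.astype(bool)]
        if len(edge_px) and is_white(edge_px).sum() > 2 / 3 * len(edge_px):
            break                                       # step 3.2 satisfied: halo range found
        times += 1                                      # step 3.4: erode once more
        eroded = cv2.erode(eroded, kernel)
        if times > max_times:
            break

    halo_px = patch_bgr[eroded.astype(bool)]            # step 3.3: color of the halo range
    if len(halo_px) and is_red(halo_px).sum() > 2 / 3 * len(halo_px):
        return 'tail', eroded
    return 'head', eroded
```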
Further, the erosion algorithm formula is x = X ⊖ S = { a | (S)_a ⊆ X }, where X is the original image, S is the structuring element, (S)_a is S translated to position a, and x is the eroded image.
Further, the data of the corresponding lamp halo range in the original image img_src are converted to HSI data; in the conversion, H is the hue of the pixel, S is its saturation, I is its intensity, and R, G and B are the values of the red, green and blue channels of the pixel.
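The conversion formula itself appears as an image in the original patent and is not reproduced in this text; the sketch below therefore uses the standard arccos-form RGB-to-HSI conversion, which is an assumption consistent with the θ mentioned in the next paragraph.

```python
import math

def rgb_to_hsi(r, g, b):
    """Standard RGB -> HSI conversion (assumed form; the patent's own formula
    image is not reproduced here). r, g, b are channel values in [0, 255]."""
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    i = (r + g + b) / 3.0                               # intensity
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i       # saturation
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den > 0:
        theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    else:
        theta = 0.0
    h = theta if b <= g else 360.0 - theta              # hue in degrees
    return h, s, i
```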
Further, the fast algorithm computes the θ value by table lookup combined with interpolation, and a pixel color is judged to be red when its HSI values satisfy a set of constraint inequalities.
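A minimal sketch of the table-lookup-plus-interpolation idea for θ follows; the table size of 1024 entries and the use of a plain arccos table are illustrative assumptions.

```python
import math

# Pre-computed arccos table over [-1, 1]; looked up with linear interpolation so
# the per-pixel hue computation avoids calling acos directly.
TABLE_SIZE = 1024
ACOS_TABLE = [math.degrees(math.acos(-1.0 + 2.0 * i / (TABLE_SIZE - 1)))
              for i in range(TABLE_SIZE)]

def fast_theta(x):
    """Approximate acos(x) in degrees by table lookup plus linear interpolation."""
    x = max(-1.0, min(1.0, x))
    pos = (x + 1.0) / 2.0 * (TABLE_SIZE - 1)
    i = int(pos)
    if i >= TABLE_SIZE - 1:
        return ACOS_TABLE[-1]
    frac = pos - i
    return ACOS_TABLE[i] * (1.0 - frac) + ACOS_TABLE[i + 1] * frac
```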
Step 4: pair the lamps according to geometric relations, identify the vehicles and compute their coordinate positions, completing nighttime vehicle detection;
Further, matching according to geometric relations means computing the size, spacing and projected area of each suspected lamp and matching suspected headlights with suspected headlights and suspected taillights with suspected taillights. For each successfully matched pair, the information of the two lamps in the image, including position coordinates and lamp attributes, is saved; the light spots that fail to pair are discarded.
Further, the size criterion for a suspected lamp pair is:
K1*(X2-X1) ≤ X4-X3 ≤ K2*(X2-X1)
K3*(Y2-Y1) ≤ Y4-Y3 ≤ K4*(Y2-Y1)
where K1, K2, K3 and K4 are lamp size range coefficients, X1 and X2 are the left and right coordinates of the first suspected lamp being matched, X3 and X4 are the left and right coordinates of the second suspected lamp, Y1 and Y2 are the upper and lower coordinates of the first suspected lamp, and Y3 and Y4 are the upper and lower coordinates of the second suspected lamp.
Further, the spacing criterion for a suspected lamp pair is:
L1 ≤ X3-X2 ≤ L2
where L1 and L2 are the lower and upper limits of the lamp spacing range;
Further, the projected-area criterion for a suspected lamp pair is:
Y4-Y1 ≥ S1*(Y3-Y2)
where S1 is the lamp projected-area range coefficient;
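A minimal sketch of the pairing test of step 4 under the three criteria above; the default coefficient values are placeholders, not values taken from the patent.

```python
def lamps_match(box1, box2, K1=0.7, K2=1.4, K3=0.7, K4=1.4, L1=20, L2=400, S1=0.5):
    """Decide whether two suspected lamps of the same type form a pair.

    box = (X1, X2, Y1, Y2): left/right and upper/lower image coordinates of a lamp;
    box1 is assumed to lie to the left of box2. All coefficients are placeholders.
    """
    X1, X2, Y1, Y2 = box1
    X3, X4, Y3, Y4 = box2
    size_ok = (K1 * (X2 - X1) <= X4 - X3 <= K2 * (X2 - X1) and
               K3 * (Y2 - Y1) <= Y4 - Y3 <= K4 * (Y2 - Y1))   # similar width and height
    dist_ok = L1 <= X3 - X2 <= L2                              # plausible horizontal spacing
    area_ok = Y4 - Y1 >= S1 * (Y3 - Y2)                        # projected-area constraint
    return size_ok and dist_ok and area_ok
```

In practice the coefficients would be tuned for the camera used, just as the patent notes for T, S1 and S2.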
Step 5: the data transmission module transfers the vehicle coordinates computed by the image processing module to the high beam control module.
The beneficial effects of the present invention are: the grid-based clustering algorithm identifies the light spots in the image, and the grid size can be set manually according to the required precision, so the clustering speed is maximized while the precision requirement is met; the halo range of each light spot is determined with an erosion algorithm and the halo color is judged with a fast algorithm, so headlights and taillights are distinguished and the misjudgments caused by bright taillights appearing white in the captured camera image are eliminated, while the image processing remains real-time; and with the data transmission module, the final lamp information produced by the image processing not only serves as the control basis of the adaptive high beam but can also support any other module that needs vehicle-lamp information.
Brief description of the drawings
Fig. 1 is a block diagram of the detection system of the present invention.
Fig. 2 is a flow chart of the detection method of the present invention.
Fig. 3 is an example of the grid clustering algorithm of the present invention.
Fig. 4 is an example of the erosion algorithm and halo range determination of the present invention.
Fig. 5 is a diagram of lamp pairing according to the geometric rules.
Specific embodiment
The present invention is further illustrated below by way of example with reference to the accompanying drawings.
As shown in Fig. 1, a rapid nighttime vehicle detection system applied to an adaptive high beam comprises an image acquisition module, an image processing module and a data transmission module.
The image acquisition module captures a traffic image of the road ahead of the vehicle and transfers it to the image processing module; the image processing module receives the image acquired by the image acquisition module, performs the calculation with a dedicated built-in algorithm and obtains the coordinate positions of the other vehicles ahead; the data transmission module transfers the vehicle coordinates computed by the image processing module to the high beam control module.
As shown in Fig. 2, a rapid nighttime vehicle detection method applied to an adaptive high beam comprises the following specific steps:
Step 1: the image acquisition module captures a traffic image of the road ahead of the vehicle and transfers the image data to the image processing module.
Step 2: the image processing module processes the image data and uses a grid clustering algorithm to identify suspected vehicle lamp regions;
Further, step 2 proceeds as follows:
Step 2.1: mirror the image img_src acquired by the image acquisition module to obtain the mirrored data img_cpy and store it in memory; pre-process img_cpy to extract the region relevant to the distribution of vehicle lamps, removing irrelevant areas such as the sky and the ground, and divide the relevant region into grids. Denote the set of all grids grid, i.e. grid = [grid_1, grid_2, ..., grid_m].
Step 2.2: compute the gray value G of every pixel of img_cpy inside the relevant region and compare it with a threshold T (T is determined experimentally for each image acquisition module); remove the background with G < T, keeping the high-brightness points and halo points, and count the bright pixels in each grid. If the number num of bright pixels exceeds a set value min_num (min_num depends on the grid size), the grid is judged to be a bright grid. Denote the set of all bright grids bright_grid, i.e. bright_grid = [bright_grid_1, bright_grid_2, ..., bright_grid_n].
Further, the gray value G of a pixel is computed as G = (R + (G << 1) + B) >> 2, where R, G and B are the values of the red, green and blue channels of the pixel.
Step 2.3: as shown in Fig. 3, apply the grid clustering algorithm to cluster all the bright grids and find all light spots in the relevant region. Denote the set of all clusters cluster, i.e. cluster = [cluster_1, cluster_2, ..., cluster_j].
Further, the grid clustering algorithm proceeds as follows:
Step 2.3.1: arbitrarily choose an unprocessed bright grid bright_grid_i from the set bright_grid as the boundary grid and mark it as processed;
Step 2.3.2: examine every grid in the ε-neighborhood of the boundary grid and judge whether it is a bright grid;
Step 2.3.3: if so, execute step 2.3.4; if not, the current cluster is finished and step 2.3.5 is executed;
Step 2.3.4: merge this grid with the boundary grid into one cluster, mark this bright grid as processed, take this grid as the new boundary grid and go to step 2.3.2;
Step 2.3.5: judge whether all bright grids in the set bright_grid have been processed; if so, end the clustering; if not, execute step 2.3.1.
Further, the ε-neighborhood satisfies the expression N_ε(bright_grid_i) = { grid_x | grid_x ∈ grid : d(grid_x, bright_grid_i) ≤ ε }, where grid is the set of all grids and d(grid_x, bright_grid_i) is the distance between the boundary grid bright_grid_i and an arbitrary grid grid_x.
Step 2.4: compute the area of each cluster; if the area S satisfies S1 < S < S2 (S1 and S2 are determined by the camera resolution and focal length), the light spot is judged to be a suspected vehicle lamp, and light spots whose area falls outside this range are judged to be interfering light sources. Denote the set of all suspected vehicle lamps sus_light, i.e. sus_light = [sus_light_1, sus_light_2, ..., sus_light_k].
Step 3: as shown in Fig. 4, determine the halo range of each suspected lamp region with an erosion algorithm, compute the halo color with a fast algorithm, and distinguish headlights from taillights;
Further, the halo range and color of a suspected lamp region are determined as follows:
Step 3.1: take the unprocessed suspected lamps sus_light_i from the set sus_light in order, mark each as processed, and apply the erosion algorithm to sus_light_i with the erosion count times set to 3;
Step 3.2: convert the edge pixels of the eroded image to HSI data with the fast algorithm and judge their color; denote the number of white pixels white_num and the total number of edge pixels pixel_num; if white_num > 2/3 * pixel_num, execute step 3.3, otherwise execute step 3.4;
Step 3.3: the eroded range is the halo range; convert the data of the corresponding lamp halo range in the original image img_src to HSI data with the fast algorithm; denote the number of red pixels red_num and the total number of pixels of the halo range pixel_sum; if red_num > 2/3 * pixel_sum, the lamp is judged to be a taillight, otherwise a headlight; execute step 3.5;
Step 3.4: continue eroding the image and increase the erosion count (times = times + 1); if times > 10, execute step 3.3, otherwise execute step 3.2;
Step 3.5: judge whether all suspected lamps in the set sus_light have been processed; if so, end the erosion; if not, execute step 3.1.
Further, the erosion algorithm formula is x = X ⊖ S = { a | (S)_a ⊆ X }, where X is the original image, S is the structuring element, (S)_a is S translated to position a, and x is the eroded image.
Further, the data of the corresponding lamp halo range in the original image img_src are converted to HSI data; in the conversion, H is the hue of the pixel, S is its saturation, I is its intensity, and R, G and B are the values of the red, green and blue channels of the pixel.
Further, the fast algorithm computes the θ value by table lookup combined with interpolation, and a pixel color is judged to be red when its HSI values satisfy a set of constraint inequalities.
Step 4: as shown in Fig. 5, pair the lamps according to geometric relations, identify the vehicles and compute their coordinate positions, completing nighttime vehicle detection;
Further, matching according to geometric relations means computing the size, spacing and projected area of each suspected lamp and matching suspected headlights with suspected headlights and suspected taillights with suspected taillights. For each successfully matched pair, the information of the two lamps in the image, including position coordinates and lamp attributes, is saved; the light spots that fail to pair are discarded.
Further, the size criterion for a suspected lamp pair is:
K1*(X2-X1) ≤ X4-X3 ≤ K2*(X2-X1)
K3*(Y2-Y1) ≤ Y4-Y3 ≤ K4*(Y2-Y1)
where K1, K2, K3 and K4 are lamp size range coefficients, X1 and X2 are the left and right coordinates of the first suspected lamp being matched, X3 and X4 are the left and right coordinates of the second suspected lamp, Y1 and Y2 are the upper and lower coordinates of the first suspected lamp, and Y3 and Y4 are the upper and lower coordinates of the second suspected lamp.
Further, the spacing criterion for a suspected lamp pair is:
L1 ≤ X3-X2 ≤ L2
where L1 and L2 are the lower and upper limits of the lamp spacing range;
Further, the projected-area criterion for a suspected lamp pair is:
Y4-Y1 ≥ S1*(Y3-Y2)
where S1 is the lamp projected-area range coefficient;
Step 5: the data transmission module transfers the vehicle coordinates computed by the image processing module to the high beam control module.
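To tie the steps together, a hypothetical end-to-end pipeline is sketched below using the helper functions introduced earlier (bright_grids, cluster_bright_grids, classify_lamp, lamps_match); the area limits, the lamp-patch extraction and the white/red pixel predicates are all illustrative assumptions, not values or rules taken from the patent.

```python
def detect_vehicles(img_bgr, S1_area=4, S2_area=400, grid_size=16):
    """Hypothetical end-to-end pipeline: bright grids -> clusters -> lamp type -> pairs.

    Returns a list of (lamp_type, box1, box2) tuples, one per detected vehicle.
    All numeric limits here are placeholders; cluster area is measured in grids.
    """
    grids = bright_grids(img_bgr, grid_size=grid_size)                 # steps 2.1-2.2
    clusters = cluster_bright_grids(grids)                             # step 2.3
    suspected = [c for c in clusters if S1_area < len(c) < S2_area]    # step 2.4 area filter

    lamps = []
    for c in suspected:                                                # step 3: lamp type
        ys = [row for row, _ in c]
        xs = [col for _, col in c]
        y0, y1 = min(ys) * grid_size, (max(ys) + 1) * grid_size
        x0, x1 = min(xs) * grid_size, (max(xs) + 1) * grid_size
        patch = img_bgr[y0:y1, x0:x1]
        kind, _ = classify_lamp(
            patch,
            is_white=lambda p: p.min(axis=-1) > 200,       # placeholder white test
            is_red=lambda p: p[..., 2] > 1.5 * p[..., 1])  # placeholder red test (BGR)
        lamps.append((kind, (x0, x1, y0, y1)))

    vehicles = []                                                      # step 4: pairing
    for i, (k1, b1) in enumerate(lamps):
        for k2, b2 in lamps[i + 1:]:
            if k1 == k2 and lamps_match(b1, b2):
                vehicles.append((k1, b1, b2))
    return vehicles                                                    # step 5: hand off to control
```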
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an illustrative embodiment", "an example", "a specific example" or "some examples" means that a particular feature, structure, material or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic references to the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described, those skilled in the art will understand that various changes, modifications, substitutions and variations may be made to these embodiments without departing from the principle and purpose of the invention, the scope of which is defined by the claims and their equivalents.
Claims (10)
1. A rapid nighttime vehicle detection method applied to an adaptive high beam, characterized by comprising the following steps:
Step 1: an image acquisition module captures a traffic image of the road ahead of the vehicle and transfers the image data to an image processing module;
Step 2: the image processing module processes the image data and uses a grid clustering algorithm to identify suspected vehicle lamp regions;
Step 3: the halo range of each suspected lamp region is determined with an erosion algorithm, the halo color is computed with a fast algorithm, and headlights are distinguished from taillights;
Step 4: the lamps are paired according to geometric relations, vehicles are identified and their coordinate positions are computed, completing nighttime vehicle detection;
Step 5: a data transmission module transfers the vehicle coordinates computed by the image processing module to the high beam control module.
2. The rapid nighttime vehicle detection method applied to an adaptive high beam according to claim 1, characterized in that step 2 proceeds as follows:
Step 2.1: mirror the image img_src acquired by the image acquisition module to obtain the mirrored data img_cpy and store it in memory; pre-process img_cpy to extract the region relevant to the distribution of vehicle lamps, removing irrelevant areas such as the sky and the ground, and divide the relevant region into grids; denote the set of all grids grid, i.e. grid = [grid_1, grid_2, ..., grid_m];
Step 2.2: compute the gray value G of every pixel of img_cpy inside the relevant region, compare it with a threshold T, and remove the background with G < T, keeping the high-brightness points and halo points; count the bright pixels in each grid, and if the number num of bright pixels exceeds a set value min_num, the grid is judged to be a bright grid; denote the set of all bright grids bright_grid, i.e. bright_grid = [bright_grid_1, bright_grid_2, ..., bright_grid_n];
Step 2.3: apply the grid clustering algorithm to cluster all the bright grids and find all light spots in the relevant region; denote the set of all clusters cluster, i.e. cluster = [cluster_1, cluster_2, ..., cluster_j];
Step 2.4: compute the area of each cluster; if the area S satisfies S1 < S < S2 (S1 and S2 are determined by the camera resolution and focal length), the light spot is judged to be a suspected vehicle lamp, and light spots whose area falls outside this range are judged to be interfering light sources; denote the set of all suspected vehicle lamps sus_light, i.e. sus_light = [sus_light_1, sus_light_2, ..., sus_light_k].
3. The rapid nighttime vehicle detection method applied to an adaptive high beam according to claim 2, characterized in that the gray value G of a pixel is computed as G = (R + (G << 1) + B) >> 2, where R, G and B are the values of the red, green and blue channels of the pixel.
4. The rapid nighttime vehicle detection method applied to an adaptive high beam according to claim 2, characterized in that the grid clustering algorithm proceeds as follows:
Step 2.3.1: arbitrarily choose an unprocessed bright grid bright_grid_i from the set bright_grid as the boundary grid and mark it as processed;
Step 2.3.2: examine every grid in the ε-neighborhood of the boundary grid and judge whether it is a bright grid;
Step 2.3.3: if so, execute step 2.3.4; if not, the current cluster is finished and step 2.3.5 is executed;
Step 2.3.4: merge this grid with the boundary grid into one cluster, mark this bright grid as processed, take this grid as the new boundary grid and go to step 2.3.2;
Step 2.3.5: judge whether all bright grids in the set bright_grid have been processed; if so, end the clustering; if not, execute step 2.3.1.
5. The rapid nighttime vehicle detection method applied to an adaptive high beam according to claim 4, characterized in that the ε-neighborhood satisfies the expression N_ε(bright_grid_i) = { grid_x | grid_x ∈ grid : d(grid_x, bright_grid_i) ≤ ε }, where grid is the set of all grids and d(grid_x, bright_grid_i) is the distance between the boundary grid bright_grid_i and an arbitrary grid grid_x.
6. The rapid nighttime vehicle detection method applied to an adaptive high beam according to claim 1, characterized in that step 3 proceeds as follows:
Step 3.1: take the unprocessed suspected lamps sus_light_i from the set sus_light in order, mark each as processed, and apply the erosion algorithm to sus_light_i with the erosion count times set to 3;
Step 3.2: convert the edge pixels of the eroded image to HSI data with the fast algorithm and judge their color; denote the number of white pixels white_num and the total number of edge pixels pixel_num; if white_num > 2/3 * pixel_num, execute step 3.3, otherwise execute step 3.4;
Step 3.3: the eroded range is the halo range; convert the data of the corresponding lamp halo range in the original image img_src to HSI data with the fast algorithm; denote the number of red pixels red_num and the total number of pixels of the halo range pixel_sum; if red_num > 2/3 * pixel_sum, the lamp is judged to be a taillight, otherwise a headlight; execute step 3.5;
Step 3.4: continue eroding the image and increase the erosion count (times = times + 1); if times > 10, execute step 3.3, otherwise execute step 3.2;
Step 3.5: judge whether all suspected lamps in the set sus_light have been processed; if so, end the erosion; if not, execute step 3.1.
7. The rapid nighttime vehicle detection method applied to an adaptive high beam according to claim 6, characterized in that the erosion algorithm formula is x = X ⊖ S = { a | (S)_a ⊆ X }, where X is the original image, S is the structuring element, (S)_a is S translated to position a, and x is the eroded image.
8. The rapid nighttime vehicle detection method applied to an adaptive high beam according to claim 6, characterized in that the data of the corresponding lamp halo range in the original image img_src are converted to HSI data, where H is the hue of the pixel, S is its saturation, I is its intensity, and R, G and B are the values of the red, green and blue channels of the pixel.
9. The rapid nighttime vehicle detection method applied to an adaptive high beam according to claim 6, characterized in that the fast algorithm computes the θ value by table lookup combined with interpolation, and a pixel color is judged to be red when its HSI values satisfy a set of constraint inequalities.
10. The rapid nighttime vehicle detection method applied to an adaptive high beam according to claim 1, characterized in that matching according to geometric relations means computing the size, spacing and projected area of each suspected lamp and matching suspected headlights with suspected headlights and suspected taillights with suspected taillights; for each successfully matched pair, the information of the two lamps in the image, including position coordinates and lamp attributes, is saved, and the light spots that fail to pair are discarded.
The size criterion for a suspected lamp pair is:
K1*(X2-X1) ≤ X4-X3 ≤ K2*(X2-X1)
K3*(Y2-Y1) ≤ Y4-Y3 ≤ K4*(Y2-Y1)
where K1, K2, K3 and K4 are lamp size range coefficients, X1 and X2 are the left and right coordinates of the first suspected lamp being matched, X3 and X4 are the left and right coordinates of the second suspected lamp, Y1 and Y2 are the upper and lower coordinates of the first suspected lamp, and Y3 and Y4 are the upper and lower coordinates of the second suspected lamp.
The spacing criterion for a suspected lamp pair is:
L1 ≤ X3-X2 ≤ L2
where L1 and L2 are the lower and upper limits of the lamp spacing range.
The projected-area criterion for a suspected lamp pair is:
Y4-Y1 ≥ S1*(Y3-Y2)
where S1 is the lamp projected-area range coefficient.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910208692.1A CN110084111B (en) | 2019-03-19 | 2019-03-19 | Rapid night vehicle detection method applied to self-adaptive high beam |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910208692.1A CN110084111B (en) | 2019-03-19 | 2019-03-19 | Rapid night vehicle detection method applied to self-adaptive high beam |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110084111A true CN110084111A (en) | 2019-08-02 |
CN110084111B CN110084111B (en) | 2023-08-25 |
Family
ID=67413312
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910208692.1A Active CN110084111B (en) | 2019-03-19 | 2019-03-19 | Rapid night vehicle detection method applied to self-adaptive high beam |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110084111B (en) |
- 2019-03-19: CN application CN201910208692.1A filed; granted as patent CN110084111B (status: Active)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101739827A (en) * | 2009-11-24 | 2010-06-16 | 北京中星微电子有限公司 | Vehicle detecting and tracking method and device |
CN101727748A (en) * | 2009-11-30 | 2010-06-09 | 北京中星微电子有限公司 | Method, system and equipment for monitoring vehicles based on vehicle taillight detection |
CN103208185A (en) * | 2013-03-19 | 2013-07-17 | 东南大学 | Method and system for nighttime vehicle detection on basis of vehicle light identification |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110688907A (en) * | 2019-09-04 | 2020-01-14 | 火丁智能照明(广东)有限公司 | Method and device for identifying object based on road light source at night |
CN110688907B (en) * | 2019-09-04 | 2024-01-23 | 火丁智能照明(广东)有限公司 | Method and device for identifying object based on night road light source |
CN112651269A (en) * | 2019-10-12 | 2021-04-13 | 常州通宝光电股份有限公司 | Method for rapidly detecting vehicles in front in same direction at night |
CN112651269B (en) * | 2019-10-12 | 2024-05-24 | 常州通宝光电股份有限公司 | Method for rapidly detecting forward same-direction vehicles at night |
CN112504638A (en) * | 2020-12-16 | 2021-03-16 | 上汽通用汽车有限公司 | Method and device for testing adaptive high beam system |
CN112504638B (en) * | 2020-12-16 | 2023-05-26 | 上汽通用汽车有限公司 | Test method and device of self-adaptive high beam system |
CN112927502A (en) * | 2021-01-21 | 2021-06-08 | 广州小鹏自动驾驶科技有限公司 | Data processing method and device |
CN113129375A (en) * | 2021-04-21 | 2021-07-16 | 阿波罗智联(北京)科技有限公司 | Data processing method, device, equipment and storage medium |
CN113129375B (en) * | 2021-04-21 | 2023-12-01 | 阿波罗智联(北京)科技有限公司 | Data processing method, device, equipment and storage medium |
CN114184358A (en) * | 2021-12-21 | 2022-03-15 | 上汽通用汽车有限公司 | Performance calibration verification method and system for vehicle adaptive high beam |
Also Published As
Publication number | Publication date |
---|---|
CN110084111B (en) | 2023-08-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110084111A (en) | A kind of quick vehicle detection at night method applied to adaptive high beam | |
CN107729818B (en) | Multi-feature fusion vehicle re-identification method based on deep learning | |
CN107729801B (en) | Vehicle color recognition system based on multitask deep convolution neural network | |
US10970566B2 (en) | Lane line detection method and apparatus | |
CN105260699B (en) | A kind of processing method and processing device of lane line data | |
CN110069986B (en) | Traffic signal lamp identification method and system based on hybrid model | |
CN107766821B (en) | Method and system for detecting and tracking full-time vehicle in video based on Kalman filtering and deep learning | |
CN106128115B (en) | Fusion method for detecting road traffic information based on double cameras | |
CN107315095B (en) | More vehicle automatic speed-measuring methods with illumination adaptability based on video processing | |
CN105608455B (en) | A kind of license plate sloped correcting method and device | |
CN110450706B (en) | Self-adaptive high beam control system and image processing algorithm | |
CN108357418B (en) | Preceding vehicle driving intention analysis method based on tail lamp identification | |
CN102963294B (en) | Method for judging opening and closing states of high beam of vehicle driving at night | |
CN103034836B (en) | Road sign detection method and road sign checkout equipment | |
Li et al. | Nighttime lane markings recognition based on Canny detection and Hough transform | |
CN105373794A (en) | Vehicle license plate recognition method | |
JP2012173879A (en) | Traffic signal detection apparatus and program therefor | |
CN113449632B (en) | Vision and radar perception algorithm optimization method and system based on fusion perception and automobile | |
WO2024051296A1 (en) | Method and apparatus for obstacle detection in complex weather | |
CN105678318B (en) | The matching process and device of traffic sign | |
CN109919062A (en) | A kind of road scene weather recognition methods based on characteristic quantity fusion | |
CN110688907A (en) | Method and device for identifying object based on road light source at night | |
CN104866838B (en) | A kind of front vehicles automatic testing method of view-based access control model | |
CN109800693B (en) | Night vehicle detection method based on color channel mixing characteristics | |
CN104008518B (en) | Body detection device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||