CN103164685B - Vehicle lamp detection method and vehicle lamp detection device - Google Patents


Info

Publication number
CN103164685B
Authority
CN
China
Prior art keywords
car light
item
fused images
image
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201110409405.7A
Other languages
Chinese (zh)
Other versions
CN103164685A (en)
Inventor
王晓萌
刘丽艳
胡平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ricoh Co Ltd
Original Assignee
Ricoh Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ricoh Co Ltd filed Critical Ricoh Co Ltd
Priority to CN201110409405.7A
Publication of CN103164685A
Application granted
Publication of CN103164685B
Legal status: Active
Anticipated expiration

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

A vehicle lamp detection method is provided, comprising: an acquisition step of synchronously acquiring, at predetermined time intervals, pairs of corresponding polarization grayscale images and normal grayscale images; a fusion step of fusing each corresponding polarization grayscale image and normal grayscale image into a fused image in which lights are highlighted; a lamp candidate extraction step of extracting, for each fused image, lamp candidates by brightness-based connected component analysis; a lamp candidate association step of, for every two consecutive fused images, associating the lamp items and lamp candidates of the preceding fused image with the lamp candidates of the succeeding fused image and labeling as lamp items those candidates of the succeeding fused image that are associated with a lamp item or lamp candidate of the preceding fused image; and an output step of outputting, for every two consecutive fused images, the succeeding fused image and its labeled lamp items. A corresponding vehicle lamp detection device is also provided.

Description

Vehicle lamp detection method and vehicle lamp detection device
Technical field
The present invention relates to a vehicle lamp detection method and a vehicle lamp detection device.
Background art
With the continuing development and widespread application of intelligent transportation system (ITS) technology, automatic vehicle detection techniques are also advancing. In low-light conditions such as night-time, vehicles can be detected by detecting their lamps, and night-time vehicle detection techniques have made considerable progress.
Patent document 1 (US7512252B2) provides a method for detecting vehicles at night. The method uses images acquired by a camera to detect the headlamps and taillights of vehicles and to distinguish vehicle lamps from other light sources such as traffic signals and street lamps. The method is based on two color images: a long-exposure image used to detect white light sources and a short-exposure image used to detect red light sources, the two images being used to detect different lights. However, the method has difficulty distinguishing taillights from the light of reflectors.
Patent document 2 (US7949190B2) provides a night-time vehicle detection method. By detecting vehicle lamps, the method distinguishes and identifies oncoming and preceding vehicles at night, thereby providing the driver with supplementary information for analyzing the situation in front of the current vehicle. The method is based on a single image and uses binarization and connected component analysis. However, to remove false alarms, i.e., vehicle candidates that are actually not vehicles, it relies only on a horizontal-line criterion: with a horizontal line at a certain height in the image as the reference, only candidates below that line are regarded as vehicle lamps, so the false detection rate is high.
Patent document 3 (US20070182623A1) provides a method for online calibration of multiple object position sensors. The method is based on object trajectories; each sensor computes three geometric parameters, two of which are used for localization and one for position correction. In that method the object position sensors are used for calibration and for outputting object trajectories; inter-frame information is not fully exploited, and no false-alarm removal is considered.
None of the above prior-art methods can guarantee both a high detection rate and a low false detection rate for night-time vehicle detection. If only a normal grayscale image is used, non-vehicle lights in the image, such as reflector reflections and building lights, cannot be well distinguished from real vehicle lamps, resulting in a high false detection rate. Even where two color images are used for lamp detection, only the difference in exposure time is exploited, each image being used to detect a different type of lamp (e.g., headlamps or taillights), so the detection rate is low and the false detection rate is high.
Summary of the invention
The present invention has been made in view of the above problems in the prior art. Embodiments of the present invention relate to a vehicle lamp detection method and a vehicle lamp detection device; more particularly, they provide a technique for night-time vehicle detection using polarization images, which can be used in driver assistance systems.
According to one aspect of the present invention, a vehicle lamp detection method is provided, comprising: an acquisition step of synchronously acquiring, at predetermined time intervals, polarization grayscale images and normal grayscale images, wherein a polarization grayscale image and a normal grayscale image acquired at the same time correspond to each other; a fusion step of fusing each corresponding polarization grayscale image and normal grayscale image into a fused image in which lights are highlighted; a lamp candidate extraction step of extracting, for each fused image, lamp candidates by brightness-based connected component analysis; a lamp candidate association step of, for every two consecutive fused images, associating the lamp items and lamp candidates of the preceding fused image with the lamp candidates of the succeeding fused image and labeling as lamp items those candidates of the succeeding fused image that are associated with a lamp item or lamp candidate of the preceding fused image; and an output step of outputting, for every two consecutive fused images, the succeeding fused image and its labeled lamp items.
According to another aspect of the present invention, a vehicle lamp detection device is provided, comprising: an acquisition unit that synchronously acquires, at predetermined time intervals, polarization grayscale images and normal grayscale images, wherein a polarization grayscale image and a normal grayscale image acquired at the same time correspond to each other; a fusion unit that fuses each corresponding polarization grayscale image and normal grayscale image into a fused image in which lights are highlighted; a lamp candidate extraction unit that extracts, for each fused image, lamp candidates by brightness-based connected component analysis; a lamp candidate association unit that, for every two consecutive fused images, associates the lamp items and lamp candidates of the preceding fused image with the lamp candidates of the succeeding fused image and labels as lamp items those candidates of the succeeding fused image that are associated with a lamp item or lamp candidate of the preceding fused image; and an output unit that outputs, for every two consecutive fused images, the succeeding fused image and its labeled lamp items.
With the vehicle lamp detection method and vehicle lamp detection device of the embodiments of the present invention, compared with patent document 1, vehicle detection is performed on the basis of one grayscale image and one polarization image, and by analyzing the difference between the two images the taillights can be well distinguished from reflector light; compared with patent document 2, the embodiments use the polarization image to distinguish reflector light and also use inter-frame information to remove the influence of noise lights, thereby ensuring both a high detection rate and a low false detection rate; compared with patent document 3, although both employ tracking, in the embodiments of the present invention inter-frame information is fully exploited so that the influence of noise is eliminated.
The above and other objects, features, advantages, and technical and industrial significance of the present invention will be better understood by reading the following detailed description of the preferred embodiments of the present invention in conjunction with the accompanying drawings.
Brief description of the drawings
Fig. 1 schematically shows a driver assistance system as an application environment in which the vehicle lamp detection method and vehicle lamp detection device according to embodiments of the present invention can be applied.
Fig. 2 shows an overall flowchart of the vehicle lamp detection method according to an embodiment of the present invention.
Fig. 3 shows a flowchart of the fusion step according to an embodiment of the present invention.
Fig. 4 comprises Figs. 4A to 4F and shows an implementation example of the fusion step, wherein Fig. 4A shows an example of a polarization grayscale image; Fig. 4B shows an example of the normal grayscale image corresponding to that polarization grayscale image; Fig. 4C shows an example of the difference image obtained by subtracting the polarization grayscale image of Fig. 4A from the normal grayscale image of Fig. 4B; Fig. 4D shows an example of the first binary image obtained by binarizing the difference image of Fig. 4C; Fig. 4E shows the superimposed image obtained by superimposing the first binary image of Fig. 4D on the normal grayscale image of Fig. 4B; and Fig. 4F shows the fused image obtained by smoothing the superimposed image of Fig. 4E.
Fig. 5 shows a flowchart of the lamp candidate extraction step S300 according to an embodiment of the present invention.
Fig. 6 comprises Figs. 6A and 6B and illustrates the effect of the connected component matching step, wherein Fig. 6A shows an example of a second binary image before connected component matching and Fig. 6B shows the result of matching the candidate connected components by the connected component matching step.
Fig. 7 comprises Figs. 7A and 7B, each showing an example of deleting lamp candidates by the second deletion step.
Fig. 8 shows a general block diagram of the vehicle lamp detection device according to an embodiment of the present invention.
Embodiment
Embodiments of the present invention are described below with reference to the accompanying drawings.
Fig. 1 schematically shows a driver assistance system as an application environment in which the vehicle lamp detection method and vehicle lamp detection device according to embodiments of the present invention can be applied.
As shown in Fig. 1, a driver assistance system 2 is installed on a vehicle 1. The driver assistance system 2 comprises a camera 4 mounted on the vehicle 1, which acquires images of the current environment in real time, and a processor 3, which may be a central processing unit (CPU), a digital signal processor (DSP) with processing capability, or the like. The processor 3 performs lamp detection on the images; if a lamp candidate is detected and can be judged or predicted to be a vehicle lamp, the lamp item is marked and indicated to the driver, for example by being displayed on the windshield or on a separate display. The camera 4 comprises at least one normal camera and one polarization camera fitted with a special filter on its sensor, which acquire the normal grayscale images and the polarization grayscale images, respectively; the normal camera and the polarization camera of the camera 4 synchronously acquire, in real time, a normal grayscale image and a polarization grayscale image of the same scene in front of the vehicle.
Fig. 2 shows an overall flowchart of the vehicle lamp detection method according to an embodiment of the present invention. As shown in Fig. 2, the vehicle lamp detection method may comprise: an acquisition step S100 of synchronously acquiring, at predetermined time intervals, polarization grayscale images and normal grayscale images, wherein a polarization grayscale image and a normal grayscale image acquired at the same time correspond to each other; a fusion step S200 of fusing each corresponding polarization grayscale image and normal grayscale image into a fused image in which lights are highlighted; a lamp candidate extraction step S300 of extracting, for each fused image, lamp candidates by brightness-based connected component analysis; a lamp candidate association step S400 of, for every two consecutive fused images, associating the lamp items and lamp candidates of the preceding fused image with the lamp candidates of the succeeding fused image and labeling as lamp items those candidates of the succeeding fused image that are associated with a lamp item or lamp candidate of the preceding fused image; and an output step S500 of outputting, for every two consecutive fused images, the succeeding fused image and its labeled lamp items.
In the acquisition step S100, the polarization grayscale image and the normal (i.e., ordinary, conventional) grayscale image are acquired for subsequent fusion. Light has several characteristic factors, such as amplitude, wavelength, and vibration direction. Light of a specific band can therefore be filtered out by a special filter mounted on the sensor, for example a polarization filter. For the embodiments of the present invention, a polarization filter that attenuates light in the band of automobile taillights can be selected. Owing to the polarization filter, reflector light, which closely resembles taillights in the normal grayscale image, can be well distinguished in the polarization grayscale image: compared with the normal grayscale image, the taillights in the polarization grayscale image become darker relative to other light sources. This property is exploited effectively in the subsequent image fusion. Those skilled in the art will appreciate that the acquisition step S100 is performed continuously and synchronously at the predetermined time interval, acquiring one pair of corresponding polarization and normal grayscale images each time, and that the subsequent fusion is performed between each such pair. Unless otherwise stated, every process of the embodiments of the present invention is likewise applied to the images acquired at each time instant or to the fused images obtained from them.
Fig. 3 shows a flowchart of the fusion step S200 according to an embodiment of the present invention.
The fusion step S200 is performed separately for each pair of corresponding polarization grayscale image and normal grayscale image. For any such pair, the fusion step S200 may comprise: a difference image calculation step S210 of subtracting the corresponding polarization grayscale image from the normal grayscale image to obtain a difference image; a first image binarization step S220 of binarizing the difference image into a first binary image based on a first predetermined brightness threshold; an image superposition step S230 of superimposing the first binary image on the normal grayscale image to obtain a superimposed image; and an image smoothing step S240 of smoothing the superimposed image to obtain the fused image.
Fig. 4 comprises Figs. 4A to 4F and shows an implementation example of the fusion step S200. Fig. 4A shows an example of a polarization grayscale image, and Fig. 4B shows an example of the corresponding normal grayscale image.
In the difference image calculation step S210, the difference image of the polarization image and the normal grayscale image can be calculated, for example, by the following formula (1):
Image_diff = Image_mono − Image_polar    (1)
where Image_mono denotes the normal grayscale image, Image_polar denotes the polarization grayscale image, and Image_diff denotes the resulting difference image. Since the normal grayscale image and the polarization grayscale image correspond to each other, the difference can be computed pixel by pixel, i.e., by subtracting the grayscale values of pixels at corresponding positions. Fig. 4C shows an example of the difference image obtained by subtracting the polarization grayscale image of Fig. 4A from the normal grayscale image of Fig. 4B.
After the polarization filter, the taillights become darker while reflector light remains almost unchanged. Therefore, in the first image binarization step S220, binarizing the difference image preserves the amplified change of the taillights, while reflector reflections can be treated as noise light and removed.
For example, the binarization of the difference image can be implemented by the following formula (2):
Image_bin(i, j) = 255 if Image_diff(i, j) > T_brightness, and 0 otherwise    (2)
where Image_diff(i, j) denotes the grayscale value of pixel (i, j) in the difference image and T_brightness is the threshold used to binarize the difference image. To distinguish it from the threshold and binarization used later, this threshold T_brightness is called the first brightness threshold, and the binarization of this step is called the first (image) binarization. Image_bin(i, j) denotes the grayscale value of the pixel (i, j) at the corresponding position of the first binary image: if the grayscale value of a pixel is greater than the first brightness threshold, the value of that pixel in the first binary image is 255; otherwise it is 0. Fig. 4D shows an example of the first binary image obtained by binarizing the difference image of Fig. 4C.
The first brightness threshold may be an empirical value obtained from sample images; it may be any empirical brightness value that separates pixels at or above the typical taillight brightness from pixels below it, so that the taillights are highlighted in the first binary image.
In addition, those skilled in the art will appreciate that although in formula (2) pixels greater than the first brightness threshold are assigned the brightness 255, such pixels may instead be assigned other high brightness values, such as 250 or 200.
After the first binarization, only the taillights are retained in the first binary image. Superimposing this first binary image on the corresponding normal grayscale image then preserves the brighter vehicle lamps.
In the image superposition step S230, the superimposed image is obtained by the following formula (3):
Image_fusion = Image_bin + Image_mono    (3)
where Image_bin denotes the first binary image and Image_mono denotes the corresponding normal grayscale image; the superimposed image Image_fusion is obtained by adding the grayscale values of pixels at corresponding positions pixel by pixel. Fig. 4E shows the superimposed image obtained by superimposing the first binary image of Fig. 4D on the normal grayscale image of Fig. 4B.
The superimposed image of Fig. 4E obtained in the image superposition step S230 can already serve as the fused image of the corresponding normal grayscale image and polarization grayscale image. However, an image smoothing step S240, for example a mean filtering operation, may additionally be applied to remove some noise still present in the superimposed image. Those skilled in the art will appreciate that the image smoothing step S240 may use other smoothing operations to remove noise and is not limited to mean filtering. Fig. 4F shows the fused image obtained by smoothing the superimposed image of Fig. 4E.
By the above image fusion process, fusing the corresponding normal grayscale image and polarization grayscale image effectively removes the noise caused by reflector reflections. The resulting fused image can be used for the subsequent lamp detection processing.
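For illustration only, the fusion pipeline of steps S210–S240 could be sketched as follows with OpenCV/NumPy; the threshold value, filter kernel size, and function names are assumptions chosen for this sketch and are not prescribed by the patent.

```python
import cv2

def fuse_images(img_mono, img_polar, t_brightness=60, smooth_ksize=3):
    """Sketch of fusion steps S210-S240 (parameter values are illustrative)."""
    # S210: per-pixel difference; taillights stay bright, reflector light largely cancels out
    img_diff = cv2.subtract(img_mono, img_polar)          # saturating subtraction on uint8
    # S220: first binarization with the first brightness threshold (formula (2))
    _, img_bin = cv2.threshold(img_diff, t_brightness, 255, cv2.THRESH_BINARY)
    # S230: superimpose the binary mask on the normal grayscale image (formula (3))
    img_fusion = cv2.add(img_mono, img_bin)               # saturating addition on uint8
    # S240: optional smoothing, e.g. a mean filter, to suppress residual noise
    img_fusion = cv2.blur(img_fusion, (smooth_ksize, smooth_ksize))
    return img_fusion
```

Note that a saturating uint8 addition clips at 255 rather than summing without bound, but it keeps the lamp pixels at maximum brightness, which is what the superposition aims for.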
Fig. 5 shows a flowchart of the lamp candidate extraction step S300 according to an embodiment of the present invention. Like the preceding fusion step S200, the lamp candidate extraction step S300 is performed separately for each fused image. For any one fused image, the lamp candidate extraction step S300 may comprise: a second image binarization step S310 of binarizing the fused image into a second binary image based on a second predetermined brightness threshold; a connected component filtering step S320 of obtaining the connected components in the second binary image and retaining, as candidate connected components, those whose size is within a predetermined range; and a connected component matching step S330 of matching the candidate connected components in the second binary image based on predefined size and position rules and labeling each matched pair of candidate connected components as a lamp candidate.
In the second image binarization step S310, the fused image is binarized based on a brightness threshold. To distinguish it from the previous first image binarization, the binarization in this step is called the second (image) binarization and the threshold T'_brightness used here is called the second brightness threshold.
In the second image binarization step S310, the fused image can be binarized by the following formula (4):
I_bin(i, j) = 255 if I_fusion(i, j) > T'_brightness, and 0 otherwise    (4)
where I_fusion(i, j) denotes the grayscale value of pixel (i, j) in the fused image, T'_brightness is the second brightness threshold used to binarize the fused image, and I_bin(i, j) denotes the grayscale value of the pixel (i, j) at the corresponding position of the second binary image: if the grayscale value of a pixel is greater than the second brightness threshold, the value of that pixel in the second binary image is 255; otherwise it is 0. The darker regions of the fused image are thus removed by this second image binarization step S310.
The second brightness threshold may be an empirical value obtained from sample images; it may be any empirical brightness value that separates pixels at or above the typical vehicle lamp brightness from pixels below it, so that the vehicle lamps are highlighted in the second binary image.
In addition, those skilled in the art will appreciate that although in formula (4) pixels greater than the second brightness threshold are assigned the brightness 255, such pixels may instead be assigned other high brightness values, such as 250 or 200.
Then, in the connected component filtering step S320, connected component analysis can be performed by any existing connected component analysis means and the bounding rectangle of each component is taken; unless otherwise noted, a connected component hereinafter refers to its bounding rectangle. Regions that are too small, too large, or too elongated cannot be vehicle lamps, so in this step components can be judged by size, and only connected components whose size is within a predetermined range are retained as candidate connected components. The size range of lamp regions can be obtained from sample analysis, and connected component bounding rectangles that do not meet this range are removed. The size judgment may use any one or a combination of the following criteria (a minimal code sketch follows the list):
1) Width and/or height criterion
If the bounding rectangle of a connected component is too wide and/or too high, it is removed.
2) Aspect ratio criterion
If the aspect ratio of the bounding rectangle is not within a certain range, the component is removed.
3) Area criterion
If the area of the bounding rectangle is too large or too small, the component is removed.
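A minimal sketch of steps S310–S320 using OpenCV's connected component analysis is given below; the threshold and range values are placeholders for the empirical values that the text says are learned from sample images.

```python
import cv2

def extract_candidate_blobs(img_fusion, t2_brightness=200,
                            max_w=80, max_h=60, min_area=4, max_area=2000,
                            min_aspect=0.2, max_aspect=5.0):
    """Sketch of S310 (second binarization) and S320 (size filtering); values are illustrative."""
    # S310: second binarization with the second brightness threshold (formula (4))
    _, img_bin2 = cv2.threshold(img_fusion, t2_brightness, 255, cv2.THRESH_BINARY)
    # Connected component analysis; each stats row is (x, y, width, height, area)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(img_bin2)
    candidates = []
    for i in range(1, n):                                 # label 0 is the background
        x, y, w, h, area = stats[i]
        if w > max_w or h > max_h:                        # width/height criterion
            continue
        if not (min_aspect <= w / h <= max_aspect):       # aspect ratio criterion
            continue
        if not (min_area <= area <= max_area):            # area criterion
            continue
        candidates.append((x, y, w, h))                   # keep the bounding rectangle
    return candidates
```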
Then, in the connected component matching step S330, the candidate connected components retained (not removed) by the connected component filtering step S320 are matched. Fig. 6 comprises Figs. 6A and 6B and illustrates the effect of the matching performed by the connected component matching step S330. Fig. 6A shows an example of a second binary image before connected component matching, containing three rectangles that represent three candidate connected components to be matched.
The connected component matching step S330 may use the following predefined size and position rules; two candidate connected components in the second binary image are regarded as a matched pair if they satisfy all of the following rules.
1) The two candidate connected components are horizontally close to each other
If the horizontal distance hDis between the center points of the two candidate connected components satisfies both of the following formulas (5) and (6), the two candidate connected components are considered horizontally close to each other:
hDis < p * max(blob1.width, blob2.width)    (5)
hDis < p * max(blob1.height, blob2.height)    (6)
where blob1.width and blob1.height denote the width and height of one candidate connected component, blob2.width and blob2.height denote the width and height of the other, hDis is the distance (in pixels) between the two candidate connected components, and p is an adjustable constant, for example p = 14, determined empirically from the typical spacing of the pair of lamps of one vehicle in sample images.
2) The two candidate connected components are horizontally separated by a certain distance
If the horizontal distance hDis between the center points of the two candidate connected components satisfies the following formula (7), the two candidate connected components are considered horizontally separated by a certain distance:
hDis > q    (7)
where q is an adjustable constant in pixels, for example q = 10, determined empirically from the typical spacing of the pair of lamps of one vehicle in sample images.
3) The two candidate connected components overlap to a certain degree in the vertical direction
If the vertical overlap ratio vOverlapRatio of the two candidate connected components satisfies the following formula (8), the two candidate connected components are considered to overlap to a certain degree in the vertical direction:
vOverlapRatio > r    (8)
where vOverlapRatio denotes the ratio of the overlapping height of the vertical sides of the two candidate connected components to the largest distance between their horizontal sides, and r is an adjustable constant, for example r = 0.3, determined empirically from the typical vertical overlap of the pair of lamps of one vehicle in sample images.
4) The heights of the two candidate connected components are close
If the heights of the two candidate connected components satisfy the following formula (9), the heights of the two candidate connected components are considered close:
min(blob1.height, blob2.height) / max(blob1.height, blob2.height) < u    (9)
where blob1.height and blob2.height denote the heights of the two candidate connected components and u is an adjustable constant, for example u = 0.5, determined empirically from the typical closeness in height of the pair of lamps of one vehicle in sample images.
5) The widths of the two candidate connected components are close
If the widths of the two candidate connected components satisfy the following formula (10), the widths of the two candidate connected components are considered close:
min(blob1.width, blob2.width) / max(blob1.width, blob2.width) < v    (10)
where blob1.width and blob2.width denote the widths of the two candidate connected components and v is an adjustable constant, for example v = 0.4, determined empirically from the typical closeness in width of the pair of lamps of one vehicle in sample images.
In addition to the above five conditions, other conditions may optionally be set, for example requiring that a candidate connected component not lie on the image boundary but be a certain distance away from it.
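The pairing logic of step S330 might be sketched as below. The constants mirror the example values above; the height and width checks of rules 4) and 5) are implemented as ratio-above-threshold similarity tests, which is this sketch's reading of those rules, and all names are illustrative.

```python
def blobs_match(b1, b2, p=14, q=10, r=0.3, u=0.5, v=0.4):
    """Sketch of the size/position rules of S330; b1 and b2 are (x, y, w, h) rectangles."""
    x1, y1, w1, h1 = b1
    x2, y2, w2, h2 = b2
    h_dis = abs((x1 + w1 / 2) - (x2 + w2 / 2))            # horizontal center distance
    # Rule 1: horizontally close (formulas (5), (6))
    if not (h_dis < p * max(w1, w2) and h_dis < p * max(h1, h2)):
        return False
    # Rule 2: but separated by a minimum distance (formula (7))
    if not (h_dis > q):
        return False
    # Rule 3: vertical overlap ratio above r (formula (8))
    overlap = min(y1 + h1, y2 + h2) - max(y1, y2)
    span = max(y1 + h1, y2 + h2) - min(y1, y2)
    if span <= 0 or overlap / span <= r:
        return False
    # Rules 4 and 5: similar height and width (min/max ratio above threshold in this sketch)
    if min(h1, h2) / max(h1, h2) <= u:
        return False
    if min(w1, w2) / max(w1, w2) <= v:
        return False
    return True

def pair_candidates(blobs):
    """Pair up candidate connected components; each matched pair becomes one lamp candidate."""
    pairs = []
    for i in range(len(blobs)):
        for j in range(i + 1, len(blobs)):
            if blobs_match(blobs[i], blobs[j]):
                pairs.append((blobs[i], blobs[j]))
    return pairs
```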
Fig. 6B shows the result of matching the candidate connected components by the above connected component matching step S330. As shown in Fig. 6B, the candidate connected components in the two upper rectangles of Fig. 6A match each other and are retained in Fig. 6B as the lamp candidate extracted from this fused image by the lamp candidate extraction step S300, whereas the candidate connected component in the lower rectangle of Fig. 6A has no other candidate connected component to match; it is not retained and is therefore no longer marked in Fig. 6B.
For the lamp candidate association step S400 and the subsequent processing, the bounding rectangle of each matched pair of connected components extracted in the lamp candidate extraction step S300 is taken, so that every extracted lamp candidate is a bounding rectangle; hereinafter, whenever the position, size, movement, and so on of a lamp item or lamp candidate are discussed, its bounding rectangle is meant.
In the processing of the fusion step S200 and the lamp candidate extraction step S300 described above, each fused image (frame) in the sequence is independent; that is, the fusion step S200 and lamp candidate extraction step S300 applied to one fused image do not involve any other frame. The subsequent processing, such as the lamp candidate association step S400, uses inter-frame relations: when one fused image is processed, results obtained for other fused images are used, or the processing is performed between two fused images.
The lamp candidate association step S400 is performed for each pair of consecutive fused images in the fused image sequence, so each execution concerns a preceding fused image and a succeeding fused image. Those skilled in the art will appreciate that the "preceding fused image" of one execution is the "succeeding fused image" of the previous execution, and the "succeeding fused image" of one execution is the "preceding fused image" of the next. In other words, in each execution of the lamp candidate association step S400 the preceding fused image has already been through a lamp candidate association step S400 while the succeeding fused image has not yet; the step is thus carried out in turn for every pair of consecutive fused images in the sequence.
The preceding fused image contains lamp items, including confirmed lamp items and the predicted (or inferred) "lamp items" described later (lamp items not obtained from actually detected lamp candidates, but treated as existing lamp items in any case, as described below). The preceding fused image may also contain "lamp candidates" that have not yet been confirmed as "lamp items". The succeeding fused image contains only the lamp candidates extracted by the lamp candidate extraction step S300.
Each lamp item and lamp candidate of the preceding fused image is tried in turn against all lamp candidates of the succeeding fused image, so that every lamp item or lamp candidate of the preceding fused image attempts to establish an association with every lamp candidate of the succeeding fused image, i.e., all possible associations are traversed. Specifically, the center-point distance between any lamp item or lamp candidate of the preceding fused image and any lamp candidate of the succeeding fused image can be computed; the shortest such distance is taken, and if this shortest distance is smaller than a predetermined distance threshold, the two are considered associated. This predetermined distance threshold can be set empirically by analyzing sample images.
The lamp candidate association step S400 associates the preceding and succeeding fused images in the manner described above and labels as lamp items those lamp candidates of the succeeding fused image that are associated with a lamp item or lamp candidate of the preceding fused image.
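A minimal sketch of the nearest-center association in step S400 follows, assuming each item is represented by its bounding rectangle (x, y, w, h); the distance threshold and data layout are assumptions.

```python
import math

def center(rect):
    x, y, w, h = rect
    return (x + w / 2, y + h / 2)

def associate(prev_items, curr_candidates, dist_threshold=20.0):
    """For each lamp item/candidate of the preceding frame, find the nearest lamp candidate
    of the succeeding frame; candidates matched within the threshold become lamp items."""
    lamp_item_indices = set()
    for prev in prev_items:
        px, py = center(prev)
        best, best_d = None, float("inf")
        for idx, cand in enumerate(curr_candidates):
            cx, cy = center(cand)
            d = math.hypot(px - cx, py - cy)
            if d < best_d:
                best, best_d = idx, d
        if best is not None and best_d < dist_threshold:
            lamp_item_indices.add(best)          # this candidate is labeled as a lamp item
    return [curr_candidates[i] for i in sorted(lamp_item_indices)]
```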
Then, in the output step S500, for each pair of consecutive fused images, the succeeding fused image and its labeled lamp items are output. The output may take the form of a display, for example showing the succeeding fused image on the windshield or on a separate screen and displaying each lamp item at its position in that fused image, for example by highlighting its bounding rectangle, so as to alert the driver to the presence of a preceding or oncoming vehicle. With this, the vehicle lamp detection method of the embodiment of the present invention achieves its basic function.
The association of lamp items or lamp candidates between consecutive frames may, besides the case described above in which a lamp item or lamp candidate of the preceding fused image is associated with a lamp candidate of the succeeding fused image, also encounter the following special cases: for example, a lamp item or lamp candidate of the preceding fused image is not associated with any lamp candidate of the succeeding fused image, or a lamp candidate of the succeeding fused image is not associated with any lamp item or lamp candidate of the preceding fused image. For these cases, the vehicle lamp detection method described above can optionally be further improved.
For the case in which a lamp item or lamp candidate of the preceding fused image is not associated with any lamp candidate of the succeeding fused image, a prediction step may be added after the lamp candidate association step S400 and before the output step S500: if a lamp item of the preceding fused image was not associated with any lamp candidate of the succeeding fused image in the lamp candidate association step, then, based on the position and size of this lamp item in the preceding fused image and its previous position change, its position in the succeeding fused image is predicted, and a lamp item associated with this lamp item of the preceding fused image is marked at the predicted position in the succeeding fused image. When the preceding fused image contains unmatched items, this prediction step may process only the unmatched lamp items, or it may process both the unmatched lamp items and the unmatched lamp candidates; in the former case the unmatched lamp candidates are simply discarded.
The prediction described here is based on the assumption that a vehicle moves in a uniform straight line during the very short time interval; for example, a simple and fast Kalman-filter tracking method can be employed.
When the prediction step processes only the unmatched lamp items, the position coordinates (x, y) of the center of each lamp item in the preceding fused image processed by the lamp candidate association step S400 are known, as are its width w and height h. Moreover, as mentioned above, the preceding fused image in the lamp candidate association step S400 has itself already been processed as a "succeeding fused image" in an earlier association, so the displacement of each lamp item of the preceding fused image (for example frame n) relative to the lamp item matched with it in the fused image of the previous frame (frame n−1) is also known. For example, this displacement may be the displacement (dx, dy) of the center point (x, y) of the lamp item in frame n relative to the center point of the lamp item (or lamp candidate) associated with it in frame n−1. Accordingly, the state vector of the lamp item in the preceding fused image (frame n) can be constructed as vs(t) = (x, y, dx, dy, w, h)^T, where T denotes transposition and t denotes the time parameter of the state vector of the lamp item.
A state transition matrix A is then constructed:
A =
[ 1 0 Δt 0 0 0 ]
[ 0 1 0 Δt 0 0 ]
[ 0 0 1  0 0 0 ]
[ 0 0 0  1 0 0 ]
[ 0 0 0  0 1 0 ]
[ 0 0 0  0 0 1 ]
where Δt denotes the time interval between two consecutive fused images, i.e., the predetermined time interval used in the acquisition step S100.
The state vector vs(t+Δt) of this lamp item of the preceding fused image in the succeeding fused image (frame n+1) can then be predicted by the following formula (11):
vs(t+Δt) = A * vs(t)    (11)
The resulting state vector vs(t+Δt) can be written as (x', y', dx', dy', w', h')^T, where (t+Δt) denotes the time parameter of the predicted state vector; it contains the predicted position, size, and other information of the predicted lamp item in the succeeding fused image. The predicted lamp item does not actually exist in the succeeding fused image; it is a lamp item drawn into the succeeding fused image in order to avoid the adverse effect of a missed lamp detection. The predicted lamp item can still be treated as an actual lamp item, for example identified and output for display as usual and associated with the lamp candidates of the following fused image (frame n+2); and when no association is obtained there, the predicted lamp item can again be used to predict a lamp item in that following fused image (frame n+2).
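As an illustration, the constant-velocity prediction of formula (11) can be sketched with NumPy as follows; interpreting dx and dy as per-frame displacements (so that Δt = 1 frame) is an assumption of this sketch.

```python
import numpy as np

def predict_state(x, y, dx, dy, w, h, dt=1.0):
    """Sketch of formula (11): vs(t+dt) = A * vs(t) with the constant-velocity matrix A."""
    vs = np.array([x, y, dx, dy, w, h], dtype=float)
    A = np.array([
        [1, 0, dt, 0, 0, 0],
        [0, 1, 0, dt, 0, 0],
        [0, 0, 1,  0, 0, 0],
        [0, 0, 0,  1, 0, 0],
        [0, 0, 0,  0, 1, 0],
        [0, 0, 0,  0, 0, 1],
    ], dtype=float)
    x2, y2, dx2, dy2, w2, h2 = A @ vs            # predicted state (x', y', dx', dy', w', h')
    return x2, y2, dx2, dy2, w2, h2
```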
After a lamp item has been predicted, it can optionally be checked by its geometric properties. For example, if the width of the predicted lamp item is greater than a certain empirical value and/or its height is greater than a certain empirical value, it is removed. An aspect ratio criterion can also be applied: if the aspect ratio of the predicted lamp item is not within a certain range, it is removed.
The prediction step is intended to avoid missed detections, but if a lamp really does not exist in the actual scene, continuing to predict it may cause false detections. Therefore, on top of the prediction processing, the vehicle lamp detection method of the embodiment of the present invention may further comprise a first deletion step: if, in a fused image, a lamp item has been obtained continuously by prediction a number of times reaching a first predetermined count threshold, this lamp item can be deleted.
This first deletion step may be placed at any position in the vehicle lamp detection method after the lamp candidate extraction step S300 and before the output step S500; it may be performed before, after, or in parallel with the lamp candidate association step S400. As soon as it is found that a lamp item in the currently processed fused image has been obtained continuously by prediction rather than by actual detection for a number of times reaching the first predetermined count threshold, this lamp item can be considered non-existent and should be discarded. The first predetermined count threshold can be determined from practical experience by analyzing sample images.
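A minimal sketch of the first deletion step, assuming each tracked lamp item carries a counter of consecutive frames in which it was obtained only by prediction; the field names and the threshold are illustrative.

```python
from dataclasses import dataclass

@dataclass
class LampTrack:
    rect: tuple            # bounding rectangle (x, y, w, h)
    predicted_streak: int  # consecutive frames in which this item was only predicted

def first_deletion(tracks, max_predicted=3):
    """Drop lamp items that have been obtained purely by prediction too many times in a row."""
    return [t for t in tracks if t.predicted_streak < max_predicted]
```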
A blinking light is one form of noise light for lamp detection: it appears in only a single fused image frame. If a lamp candidate obtained from such a blinking light were detected and displayed, it would visually disturb the driver. Such flicker noise typically manifests itself as the case mentioned above in which a lamp candidate of the succeeding fused image is not associated with any lamp item or lamp candidate of the preceding fused image.
Therefore, as another improvement of the vehicle lamp detection method of the embodiment of the present invention, it can further be specified that, in the lamp candidate association step S400, if a lamp candidate of the succeeding fused image is not associated with any lamp item or lamp candidate of the preceding fused image, this candidate remains labeled as a lamp candidate in the succeeding fused image. Such a lamp candidate can be regarded as appearing "for the first time". Since the output step S500 can be designed not to display lamp candidates, such a light detected for the first time is not displayed by the subsequent output step S500; only after the next execution of the lamp candidate association step S400, when, acting as a lamp candidate of the "preceding fused image", it is associated with a lamp candidate of the following fused image, is it displayed as a lamp item in that following fused image. In this way false alarms caused by blinking lights are avoided.
In addition, experience shows that a light located below a lamp and narrower than that lamp is not another vehicle lamp, and a light located above a lamp and wider than that lamp is not another vehicle lamp either. If a lamp item appears continuously many times, its confidence is high and it can be regarded as a stable detection result. Therefore, as another improvement of the vehicle lamp detection method of the embodiment of the present invention, a second deletion step may further be included: if, in a fused image, a lamp item has existed continuously, by association and prediction, in the previous fused images for a number of times reaching a second predetermined count threshold, then in this fused image any lamp item or lamp candidate that appears below this lamp item and is narrower than it, and any lamp item or lamp candidate that appears above this lamp item and is wider than it, is deleted. Here "continuous" existence means that the lamp item forms an unbroken chain of associations or predictions over consecutive fused images; when the number of such consecutive appearances reaches the second predetermined count threshold, the lamp item can be considered stable in the currently processed fused image and may be called a "stable lamp item". The second predetermined count threshold can be determined from practical experience by analyzing sample images. For a lamp candidate detected in a fused image, or for a lamp item therein, whether it should be discarded can be judged directly from its relation to the stable lamp items within a certain range.
Fig. 7 comprises Figs. 7A and 7B, each showing an example of deleting lamp candidates by the second deletion step. In Fig. 7A, rectangle Q1 shows a stable lamp item, and the lamp item or lamp candidate shown by rectangle Q2, which is below it and narrower than it, should be deleted; in Fig. 7B, rectangle Q3 shows a stable lamp item, and the lamp item or lamp candidate shown by rectangle Q4, which is above it and wider than it, should be deleted.
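A sketch of the geometric rule of the second deletion step is given below, assuming the stable lamp items have already been identified; representing "above"/"below" by the vertical order of the rectangle centers is a simplifying assumption of this sketch.

```python
def second_deletion(items, stable_items):
    """Delete items that lie below a stable lamp item and are narrower than it,
    or lie above a stable lamp item and are wider than it; rectangles are (x, y, w, h)."""
    def cy(r):
        return r[1] + r[3] / 2.0            # vertical center (y grows downward in images)
    kept = []
    for it in items:
        drop = False
        for st in stable_items:
            if it == st:
                continue
            below_and_narrower = cy(it) > cy(st) and it[2] < st[2]
            above_and_wider = cy(it) < cy(st) and it[2] > st[2]
            if below_and_narrower or above_and_wider:
                drop = True
                break
        if not drop:
            kept.append(it)
    return kept
```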
In addition, stationary lights resembling vehicle lamps, such as building lights, may exist in reality; when such a stationary light is detected as a lamp candidate (it may be called a "stationary item"), it should be treated as noise and discarded.
Therefore, as another improvement of the vehicle lamp detection method of the embodiment of the present invention, a third deletion step may further be included: if, in a fused image, a lamp item or lamp candidate has existed continuously, by association or prediction, since a previous fused image a predetermined interval before this fused image, and if the width of this lamp item or lamp candidate in this fused image lies outside the width range predicted for this fused image from its width in that previous fused image, then this lamp item or lamp candidate is deleted.
In the fused image currently being processed (assumed to be frame n+m), suppose a lamp item or lamp candidate has existed continuously, by association or prediction in any combination, since the fused image of frame n, a predetermined interval m earlier. Its expected width in the current fused image of frame n+m can then be predicted from its width in the earlier fused image of frame n according to the following formula (12):
W_{n+m} = W_n * (d + t * s) / d    (12)
where W_n is the width of this lamp (candidate) item in the earlier fused image of frame n, W_{n+m} is its predicted width in the fused image of frame n+m, d denotes the distance between the own vehicle and the vehicle carrying this lamp item at the time frame n was processed, t denotes the time interval between frame n and frame n+m, and s denotes the relative speed of the own vehicle and the vehicle carrying this lamp item. The interval (W_{n+m} − T, W_{n+m} + T) is taken as the width range predicted for the current fused image of frame n+m.
It is then judged whether the actual width W'_{n+m} of this lamp (candidate) item in the currently processed fused image of frame n+m lies within this predicted width range. If W'_{n+m} ∈ (W_{n+m} − T, W_{n+m} + T), the lamp (candidate) item continues to be retained as a "vehicle"; otherwise it is deleted, since it may be a stationary item.
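A sketch of the width check of the third deletion step based on formula (12); the tolerance value, the way the distance and relative-speed inputs are obtained, and the names are assumptions for illustration.

```python
def width_in_predicted_range(w_n, w_actual, d, t, s, tol=3.0):
    """Formula (12): predict the expected width after time t from the width at frame n,
    the distance d to the lamp's vehicle at frame n, and the relative speed s, then check
    whether the measured width falls inside (predicted - tol, predicted + tol)."""
    w_pred = w_n * (d + t * s) / d
    return (w_pred - tol) < w_actual < (w_pred + tol)

# Items whose measured width drifts outside the predicted range are treated as
# stationary items (e.g. building lights) and deleted.
```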
The second deletion step and the third deletion step may be placed at any position in the vehicle lamp detection method after the lamp candidate extraction step S300 and before the output step S500; they may be performed before, after, or in parallel with the lamp candidate association step S400, judging, for each lamp item or lamp candidate in the currently processed fused image, whether it should be discarded.
The second deletion step and the third deletion step may also be performed after the lamp candidate association step S400 and before the output step S500. That is, in the lamp candidate association step S400, if a lamp candidate of the succeeding fused image is not associated with any lamp item or lamp candidate of the preceding fused image, it remains labeled as a lamp candidate in the succeeding fused image. The second deletion step is then performed: if, in the succeeding fused image, a lamp item has existed continuously, by association and prediction, in the previous fused images for a number of times reaching the second predetermined count threshold, then in this succeeding fused image any lamp item or lamp candidate that appears below this lamp item and is narrower than it, and any lamp item or lamp candidate that appears above this lamp item and is wider than it, is deleted. The third deletion step is then performed: if, in the succeeding fused image, a lamp item has existed continuously, by association or prediction, since a previous fused image a predetermined interval earlier, and if its width in this fused image lies outside the width range predicted from its width in that previous fused image, this lamp item is deleted. Performing these three judgments in order before the output step S500 avoids false alarms.
The present invention can also be embodied as a vehicle lamp detection device for implementing the vehicle lamp detection method described above.
Fig. 8 shows a general block diagram of the vehicle lamp detection device according to an embodiment of the present invention. As shown in Fig. 8, the vehicle lamp detection device of the embodiment comprises: an acquisition unit 100, which can implement the acquisition step S100 described above and synchronously acquires, at predetermined time intervals, polarization grayscale images and normal grayscale images, wherein a polarization grayscale image and a normal grayscale image acquired at the same time correspond to each other; a fusion unit 200, which can implement the fusion step S200 described above and fuses each corresponding polarization grayscale image and normal grayscale image into a fused image in which lights are highlighted; a lamp candidate extraction unit 300, which can implement the lamp candidate extraction step S300 described above and extracts, for each fused image, lamp candidates by brightness-based connected component analysis; a lamp candidate association unit 400, which can implement the lamp candidate association step S400 described above and, for every two consecutive fused images, associates the lamp items and lamp candidates of the preceding fused image with the lamp candidates of the succeeding fused image and labels as lamp items those candidates of the succeeding fused image that are associated with a lamp item or lamp candidate of the preceding fused image; and an output unit 500, which can implement the output step S500 described above and outputs, for every two consecutive fused images, the succeeding fused image and its labeled lamp items.
The vehicle lamp detection device according to the embodiment of the present invention may further comprise a prediction unit, placed after the lamp candidate association unit 400 and before the output unit 500, which can perform the prediction step described above: if a lamp item of the preceding fused image was not associated with any lamp candidate of the succeeding fused image in the association processing of the lamp candidate association unit 400, the prediction unit can predict, from the position and size of this lamp item in the preceding fused image and its previous position change, its position in the succeeding fused image, and mark at that position in the succeeding fused image a lamp item associated with this lamp item of the preceding fused image.
Furthermore, if, in the association processing of the lamp candidate association unit 400, a lamp candidate of the succeeding fused image is not associated with any lamp item or lamp candidate of the preceding fused image, the lamp candidate association unit 400 keeps this candidate labeled as a lamp candidate in the succeeding fused image.
The fusion unit 200 performs the fusion processing separately for each pair of corresponding polarization grayscale image and normal grayscale image, and comprises: a difference image calculation unit, which can perform the difference image calculation step S210 described above and subtracts the corresponding polarization grayscale image from the normal grayscale image to obtain a difference image; a first image binarization unit, which can perform the first image binarization step S220 described above and binarizes the difference image into a first binary image based on the first predetermined brightness threshold; an image superposition unit, which can perform the image superposition step S230 described above and superimposes the first binary image on the normal grayscale image to obtain a superimposed image; and an image smoothing unit, which can perform the image smoothing step S240 described above and smooths the superimposed image to obtain the fused image.
The lamp candidate extraction unit 300 performs the lamp candidate extraction processing separately for each fused image, and comprises: a second image binarization unit, which can perform the second image binarization step S310 described above and binarizes the fused image into a second binary image based on the second predetermined brightness threshold; a connected component filtering unit, which can perform the connected component filtering step S320 described above, obtains the connected components in the second binary image, and retains, as candidate connected components, those whose size is within a predetermined range; and a connected component matching unit, which can perform the connected component matching step S330 described above, matches the candidate connected components in the second binary image based on predefined size and position rules, and labels each matched pair of candidate connected components as a lamp candidate.
The first delete device is can further include according to the car light detection device of the embodiment of the present invention, aforesaid first delete step can be performed, if in a fused images, the number of times that car light item obtains continually by prediction reaches the first pre-determined number threshold value, then delete this car light item.
The second delete device is can further include according to the car light detection device of the embodiment of the present invention, aforesaid second delete step can be performed, if in a fused images, a car light item has reached the second pre-determined number threshold value in previous fused images with the number of times of association and prediction mode continued presence, then in this fused images, delete and to appear at below this car light item and the car light item less than this car light item width or car light candidate item and to appear at above this car light item and the car light item larger than this car light item width or car light candidate item.
The 3rd delete device is can further include according to the car light detection device of the embodiment of the present invention, aforesaid step can be performed, if in a fused images, a car light item or car light candidate item from the previous fusion image of predetermined space before this fused images to associate and prediction mode continued presence, and if this car light item or the width of car light candidate item in this fused images are according to width in this previous fusion image of this car light item or car light candidate item and outside the width range in this fused images predicted, then delete this car light item or car light candidate item.
The above first deletion device, second deletion device and third deletion device can be connected at any position in the car light detection device according to the embodiment of the present invention that is after the car light candidate extraction means and before the output unit 500; that is, they can be arranged before the car light candidate association device, after it, or in parallel with it.
Alternatively, in the car light detection device according to the embodiment of the present invention, the second deletion device and the third deletion device may be connected in sequence after the car light candidate association device and before the output unit 500. That is, the second deletion device first performs the aforesaid second deletion step: if, in the latter fused image, the number of times that a car light item has continuously existed in the previous fused images by way of association and prediction has reached the second predetermined number threshold, then, in this latter fused image, the car light items or car light candidate items that appear below this car light item and are narrower than it, as well as those that appear above this car light item and are wider than it, are deleted. Then, the third deletion device performs the aforesaid third deletion step: if, in the latter fused image, a car light item has continuously existed, by way of association and prediction, since a previous fused image located a predetermined interval before this fused image, and if the width of this car light item in this fused image falls outside the width range predicted from its width in that previous fused image, then this car light item is deleted.
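To illustrate how the deletion devices might be chained, the following sketch applies the first, second and third deletion rules in one pass over a list of tracked items; the dictionary fields, the threshold values and the width tolerance are assumptions for the example, and the check against the predetermined interval of the third deletion rule is simplified.

```python
def apply_deletion_rules(items, first_thr=3, second_thr=5, width_tolerance=0.3):
    """Apply the first, second and third deletion rules in sequence.

    Each tracked item is assumed to carry: 'box' (x, y, w, h),
    'predicted_count' (consecutive frames obtained only by prediction),
    'present_count' (consecutive frames present by association or prediction),
    'prev_width' (its width in the earlier reference frame) and 'is_item'
    (True for a car light item, False for a mere candidate)."""
    # First deletion: drop items kept alive only by prediction for too long.
    items = [it for it in items if it['predicted_count'] < first_thr]

    # Second deletion: a long-lived car light item suppresses narrower items
    # below it and wider items above it (likely reflections or noise light).
    stable = [it for it in items
              if it['is_item'] and it['present_count'] >= second_thr]
    suppressed = set()
    for s in stable:
        sx, sy, sw, sh = s['box']
        for idx, it in enumerate(items):
            if it is s:
                continue
            x, y, w, h = it['box']
            below_and_narrower = y > sy + sh and w < sw
            above_and_wider = y + h < sy and w > sw
            if below_and_narrower or above_and_wider:
                suppressed.add(idx)
    items = [it for idx, it in enumerate(items) if idx not in suppressed]

    # Third deletion: drop items whose current width falls outside the range
    # predicted from their width in the earlier reference frame.
    kept = []
    for it in items:
        w = it['box'][2]
        low = it['prev_width'] * (1.0 - width_tolerance)
        high = it['prev_width'] * (1.0 + width_tolerance)
        if low <= w <= high:
            kept.append(it)
    return kept
```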
The vehicle lamp detection method and the car light detection device according to the embodiments of the present invention provide a technique for detecting vehicles at night using polarized images. First, a synchronized pair of a polarized grayscale image and a normal grayscale image is fused: the change of the taillights is amplified in the difference image of the two images, which removes the influence of light reflected by reflectors. Then, vehicle detection is performed in the image using predefined geometric rules, which improves the detection rate. Finally, the inter-frame information between consecutive images is used to track and verify the detection results, which removes the influence of noise light, such as blinking lights, low-confidence candidates and static candidates, and thereby reduces the false detection rate.
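Putting these stages together, the following sketch outlines one possible per-frame processing loop; frame_source and the four processing callbacks are hypothetical placeholders (for example, implementations along the lines of the sketches given in this description) and are not part of the claimed embodiment.

```python
def detect_car_lights(frame_source, fuse, extract, associate, prune):
    """Per-frame loop: fuse, extract candidates, associate with the previous
    frame, prune, and output the marked car light items.

    frame_source is assumed to yield synchronized (normal_gray, polar_gray)
    pairs at the predetermined interval."""
    previous_items = []
    for normal_gray, polar_gray in frame_source:
        fused = fuse(normal_gray, polar_gray)
        candidates = extract(fused)
        # Candidates associated with the previous frame's items become car
        # light items; unmatched previous items may be carried forward by
        # prediction inside the associate callback.
        current_items = associate(previous_items, candidates)
        current_items = prune(current_items)
        yield fused, [it for it in current_items if it['is_item']]
        previous_items = current_items
```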
The series of operations described in this specification can be performed by hardware, by software, or by a combination of hardware and software. When the series of operations is performed by software, the computer program may be installed in a memory built into a computer equipped with dedicated hardware, and the computer executes the computer program. Alternatively, the computer program may be installed in a general-purpose computer capable of performing various types of processing, and the computer executes the computer program.
For example, the computer program may be stored in advance in a hard disk or a ROM (read-only memory) serving as a recording medium. Alternatively, the computer program may be stored (recorded), temporarily or permanently, in a removable recording medium such as a floppy disk, a CD-ROM (compact disc read-only memory), an MO (magneto-optical) disk, a DVD (digital versatile disc), a magnetic disk or a semiconductor memory. Such a removable recording medium can be provided as packaged software.
The present invention has been described in detail with reference to specific embodiments. However, it is obvious that those skilled in the art can make modifications and substitutions to the embodiments without departing from the spirit of the present invention. In other words, the present invention has been disclosed by way of illustration and should not be construed restrictively. The appended claims should be considered in order to determine the gist of the present invention.

Claims (10)

1. A vehicle lamp detection method, comprising:
an obtaining step of synchronously obtaining, at predetermined intervals, polarized grayscale images and normal grayscale images, wherein a polarized grayscale image and a normal grayscale image obtained at the same time correspond to each other;
a fusion step of fusing each corresponding pair of a polarized grayscale image and a normal grayscale image into a fused image in which lights are highlighted;
a car light candidate extraction step of extracting, for each fused image, car light candidate items by a connected domain analysis based on brightness;
a car light candidate association step of, for every two consecutive fused images, associating the car light items and car light candidate items of the former fused image with the car light candidate items of the latter fused image, and marking as a car light item each car light candidate item in the latter fused image that is associated with a car light item or car light candidate item of the former fused image;
an output step of outputting, for every two consecutive fused images, the latter fused image and the marked car light items.
2. The vehicle lamp detection method according to claim 1, further comprising a prediction step after the car light candidate association step and before the output step: if a car light item of the former fused image is not associated with any car light candidate item in the latter fused image in the car light candidate association step, the position of this car light item in the latter fused image is predicted according to the position and size of this car light item in the former fused image and its previous position changes, and a car light item associated with this car light item of the former fused image is marked at the predicted position in the latter fused image.
3. The vehicle lamp detection method according to claim 1, wherein, in the car light candidate association step, if a car light candidate item in the latter fused image is not associated with any car light item or car light candidate item in the former fused image, this car light candidate item is marked as a car light candidate item in the latter fused image.
4. The vehicle lamp detection method according to any one of claims 1-3, wherein the fusion step performs, for each corresponding pair of a polarized grayscale image and a normal grayscale image:
a difference image calculation step of subtracting the corresponding polarized grayscale image from the normal grayscale image to obtain a difference image;
a first image binarization step of binarizing the difference image into a first binary image based on a first predetermined luminance threshold;
an image superposition step of superposing the first binary image with the normal grayscale image to obtain a superposed image;
an image smoothing step of smoothing the superposed image to obtain the fused image.
5. The vehicle lamp detection method according to any one of claims 1-3, wherein the car light candidate extraction step performs, for each fused image:
a second image binarization step of binarizing the fused image into a second binary image based on a second predetermined luminance threshold;
a connected domain filtering step of obtaining the connected domains in the second binary image and retaining, as candidate connected domains, the connected domains whose sizes are within a preset range;
a connected domain matching step of matching the candidate connected domains in the second binary image based on predefined size and position rules, and marking matched pairs of candidate connected domains as car light candidate items.
6. The vehicle lamp detection method according to claim 2, further comprising a first deletion step: if, in a fused image, the number of consecutive times that a car light item has been obtained only by prediction reaches a first predetermined number threshold, this car light item is deleted.
7. The vehicle lamp detection method according to any one of claims 1-2, further comprising a second deletion step: if, in a fused image, the number of times that a car light item has continuously existed in the previous fused images by way of association and prediction has reached a second predetermined number threshold, then, in this fused image, the car light items or car light candidate items that appear below this car light item and are narrower than it, as well as the car light items or car light candidate items that appear above this car light item and are wider than it, are deleted.
8. The vehicle lamp detection method according to any one of claims 1-2, further comprising a third deletion step: if, in a fused image, a car light item or car light candidate item has continuously existed, by way of association and prediction, since a previous fused image located a predetermined interval before this fused image, and if the width of this car light item or car light candidate item in this fused image is outside a predetermined width range, then this car light item or car light candidate item is deleted,
wherein the predetermined width range is predicted according to the width of this car light item or car light candidate item in that previous fused image.
9. The vehicle lamp detection method according to claim 3, further comprising, after the car light candidate association step and before the output step:
a second deletion step: if, in the latter fused image, the number of times that a car light item has continuously existed in the previous fused images by way of association and prediction has reached a second predetermined number threshold, then, in this latter fused image, the car light items or car light candidate items that appear below this car light item and are narrower than it, as well as the car light items or car light candidate items that appear above this car light item and are wider than it, are deleted;
a third deletion step: if, in the latter fused image, a car light item has continuously existed, by way of association and prediction, since a previous fused image located a predetermined interval before this fused image, and if the width of this car light item in this fused image is outside a predetermined width range, then this car light item is deleted,
wherein the predetermined width range is predicted according to the width of this car light item in that previous fused image.
10. A car light detection device, comprising:
an acquisition device that synchronously obtains, at predetermined intervals, polarized grayscale images and normal grayscale images, wherein a polarized grayscale image and a normal grayscale image obtained at the same time correspond to each other;
a fusing device that fuses each corresponding pair of a polarized grayscale image and a normal grayscale image into a fused image in which lights are highlighted;
a car light candidate extraction means that extracts, for each fused image, car light candidate items by a connected domain analysis based on brightness;
a car light candidate association device that, for every two consecutive fused images, associates the car light items and car light candidate items of the former fused image with the car light candidate items of the latter fused image, and marks as a car light item each car light candidate item in the latter fused image that is associated with a car light item or car light candidate item of the former fused image;
an output unit that outputs, for every two consecutive fused images, the latter fused image and the marked car light items.
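As a sketch only of the association recited in claims 1 and 10, the following matches the previous frame's items with the current frame's candidates by bounding-box overlap; here each candidate is represented by a single bounding box (for example, the box enclosing a matched pair of connected domains), and the IoU criterion and threshold are assumptions rather than the association rule of the embodiment.

```python
def associate(previous_items, candidates, min_iou=0.3):
    """Associate the previous frame's items with the current frame's
    candidates; an associated candidate is marked as a car light item."""
    def iou(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
        iy = max(0, min(ay + ah, by + bh) - max(ay, by))
        inter = ix * iy
        union = aw * ah + bw * bh - inter
        return inter / union if union else 0.0

    current = []
    for box in candidates:
        matched = any(iou(prev['box'], box) >= min_iou for prev in previous_items)
        # Associated candidates become car light items; the rest remain
        # candidates and are carried into the next frame's association.
        current.append({'box': box, 'is_item': matched})
    return current
```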
CN201110409405.7A 2011-12-09 2011-12-09 Vehicle lamp detection method and car light detection device Active CN103164685B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110409405.7A CN103164685B (en) 2011-12-09 2011-12-09 Vehicle lamp detection method and car light detection device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110409405.7A CN103164685B (en) 2011-12-09 2011-12-09 Vehicle lamp detection method and car light detection device

Publications (2)

Publication Number Publication Date
CN103164685A CN103164685A (en) 2013-06-19
CN103164685B true CN103164685B (en) 2016-04-27

Family

ID=48587757

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110409405.7A Active CN103164685B (en) 2011-12-09 2011-12-09 Vehicle lamp detection method and car light detection device

Country Status (1)

Country Link
CN (1) CN103164685B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680172B (en) * 2013-11-28 2018-11-16 深圳市朗驰欣创科技有限公司 A kind of night automobile video frequency grasp shoot method and system
CN103870809B (en) * 2014-02-27 2017-06-16 奇瑞汽车股份有限公司 The detection method and device of vehicle
CN103914701B (en) * 2014-03-20 2017-10-27 燕山大学 A kind of vehicle detection at night method based on image
CN107786815B (en) * 2016-08-30 2020-02-21 比亚迪股份有限公司 Active night vision self-adaptive exposure method and system and vehicle
CN112071079B (en) * 2020-09-07 2022-06-07 浙江师范大学 Machine vision vehicle high beam detection early warning system based on 5G transmission
CN112927502B (en) * 2021-01-21 2023-02-03 广州小鹏自动驾驶科技有限公司 Data processing method and device
CN112906504B (en) * 2021-01-29 2022-07-12 浙江安谐智能科技有限公司 Night vehicle high beam opening state discrimination method based on double cameras
CN113222870B (en) * 2021-05-13 2023-07-25 杭州海康威视数字技术股份有限公司 Image processing method, device and equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010149913A1 (en) * 2009-06-23 2010-12-29 France Telecom Encoding and decoding a video image sequence by image areas

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101277429A (en) * 2007-03-27 2008-10-01 中国科学院自动化研究所 Method and system for amalgamation process and display of multipath video information when monitoring
CN101112315A (en) * 2007-08-24 2008-01-30 珠海友通科技有限公司 X-ray human body clairvoyance image automatic anastomosing and splicing method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"Detection Method for Dim Two-Color Infrared Targets Based on an Asymmetric Fusion Strategy"; Li Qiuhua et al.; Signal Processing; 2009-05-31; Vol. 25, No. 5; pp. 713-719 *
"Research on the Implementation of a Vehicle Lamp Detection System"; Zhao Mingfu, Lei Jianjun, Li Taifu; Journal of Liaoning Technical University; 2003-06-30; Vol. 22, No. 3; pp. 362-364 *
"Research on the Application of Fuzzy Logic in Distributed Multi-Target Tracking Fusion"; Chen Xiaohui, Wan Dejun, Wang Qing; Journal of Southeast University (Natural Science Edition); 2003-11-30; Vol. 33, No. 6; pp. 754-757 *

Also Published As

Publication number Publication date
CN103164685A (en) 2013-06-19

Similar Documents

Publication Publication Date Title
CN103164685B (en) Vehicle lamp detection method and car light detection device
CN104723991B (en) Parking assistance system and parking assistance method for vehicle
US9047518B2 (en) Method for the detection and tracking of lane markings
Hautière et al. Real-time disparity contrast combination for onboard estimation of the visibility distance
Hoffman et al. Vehicle detection fusing 2D visual features
US9123242B2 (en) Pavement marker recognition device, pavement marker recognition method and pavement marker recognition program
Messelodi et al. Intelligent extended floating car data collection
CN105528914A (en) Driver assisting apparatus and driver assisting method
KR101032160B1 (en) System and method for road visibility measurement using camera
Tae-Hyun et al. Detection of traffic lights for vision-based car navigation system
US9965690B2 (en) On-vehicle control device
JP6139088B2 (en) Vehicle detection device
JP2015090679A (en) Vehicle trajectory extraction method, vehicle region extraction method, vehicle speed estimation method, vehicle trajectory extraction program, vehicle region extraction program, vehicle speed estimation program, vehicle trajectory extraction system, vehicle region extraction system, and vehicle speed estimation system
JP2005318408A (en) Vehicle surrounding monitoring apparatus and method
JP2011227657A (en) Device for monitoring periphery of vehicle
Tang et al. Robust vehicle surveillance in night traffic videos using an azimuthally blur technique
JPH05157558A (en) Vehicular gap detector
JPH07244717A (en) Travel environment recognition device for vehicle
JP3844750B2 (en) Infrared image recognition device and alarm device using infrared image recognition device
JPH11211845A (en) Rainfall/snowfall detecting method and its device
JP4972596B2 (en) Traffic flow measuring device
JP4506299B2 (en) Vehicle periphery monitoring device
JP2004362265A (en) Infrared image recognition device
JP2002163645A (en) Device and method for detecting vehicle
Gumpp et al. Recognition and tracking of temporary lanes in motorway construction sites

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant