CN106845453A - Taillight detection and recognition method based on images - Google Patents

Taillight detection and recognition method based on images

Info

Publication number
CN106845453A
CN106845453A CN201710101262.0A CN201710101262A
Authority
CN
China
Prior art keywords
taillight
image
rec
region
state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710101262.0A
Other languages
Chinese (zh)
Other versions
CN106845453B (en)
Inventor
谢刚 (Xie Gang)
续欣莹 (Xu Xinying)
谢新林 (Xie Xinlin)
白博 (Bai Bo)
郭磊 (Guo Lei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taiyuan University of Technology
Original Assignee
Taiyuan University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taiyuan University of Technology
Priority to CN201710101262.0A
Publication of CN106845453A
Application granted
Publication of CN106845453B
Expired - Fee Related

Classifications

    • G06V20/584 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects: recognition of vehicle lights or traffic lights
    • G06F18/2411 - Classification techniques relating to the classification model based on the proximity to a decision surface, e.g. support vector machines
    • G06V10/267 - Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V10/56 - Extraction of image or video features relating to colour
    • G06V2201/08 - Indexing scheme relating to image or video recognition or understanding: detecting or categorising vehicles

Abstract

The invention discloses an image-based taillight detection and recognition method. The method uses real-time images of the vehicle ahead collected by an ordinary camera and preprocesses them by gradient sharpening and image cropping; adaptive threshold segmentation is carried out by combining the HSI and RGB color spaces to extract the color information of the taillights; after filtering, denoising and morphological transformation, contours are extracted and geometric constraints are used to group taillight pairs; the state information is judged hierarchically based on SVM, and a semantic interpretation of the leading vehicle's taillight image is output. As an important part of an on-board advanced driver assistance system, the invention achieves good processing results and real-time processing capability for the detection of leading-vehicle taillights and the judgment of their state information in complex urban environments.

Description

Taillight detection and recognition method based on images
Technical field
The invention belongs to the field of image processing, and in particular relates to an image-based taillight detection and recognition method.
Background technology
Traffic safety is a global problem, and how to use intelligent driving assistance systems to help drivers avoid safety risks has become a popular topic. Intelligent driving assistance systems emphasize a comprehensive perception of the surrounding environment, for example information about the road, surrounding vehicles and traffic signs that is relevant to the driver, so as to help the driver plan a safe travel route for the vehicle. Current research mostly focuses on road detection, traffic light recognition, pedestrian detection and obstacle recognition, while research on the influence of surrounding vehicles on the driving state of the ego vehicle is relatively scarce. The taillight information of the leading vehicle, i.e. its lamp signals, is an important means by which the leading vehicle conveys its route planning to other vehicles, and is therefore a key subject in research on the surrounding environment.
At present, research on taillight detection and recognition concentrates on vehicle detection at night, using methods such as the extraction of taillight shape, color and motion features. Because the brightness of leading-vehicle taillights at night is high, even for a low-cost image acquisition device the extraction of color features from the captured image during taillight detection is comparatively convenient and stable. When processing color features, different researchers have selected different color models according to the emphasis of their processing. For example, Nagumo S et al. (Nagumo S, Hasegawa H, Okamoto N. Extraction of forward vehicles by front-mounted camera using brightness information [C] // Electrical and Computer Engineering, 2003. IEEE CCECE 2003. Canadian Conference on. IEEE, 2003, 2: 1243-1246) selected the YCrCb color model to segment night-time taillight regions and verified the taillights by matching with key position features; O'Malley R et al. (O'Malley R, Jones E, Glavin M. Rear-lamp vehicle detection and tracking in low-exposure color video for night conditions [J]. IEEE Transactions on Intelligent Transportation Systems, 2010, 11(2): 453-462) studied automobile taillight regions using the HSV model and, with a large number of real-scene image samples, obtained taillight segmentation thresholds in HSV space by statistical analysis; Liu Bo et al. (Liu Bo, Zhou Heqin, Wei Mingxu. Nighttime vehicle detection method based on color and motion information [J]. Journal of Image and Graphics: Series A, 2005, 10(2): 187-191) worked in the RGB color space and weighed the red-component ratio of each pixel to decide whether the pixel belongs to a taillight region, then matched taillight pairs with a tracking algorithm. However, because the above studies perform the detection and segmentation of taillight regions in only a single color space, they depend strongly on the chosen color space, which leads to incomplete information and, in severe cases, makes the taillight region impossible to extract. In addition, most studies extract and detect the taillight regions of the leading vehicle only for real-time vehicle tracking, and do not give a semantic interpretation of the lamp-signal information contained in the taillight image.
How to improve the accuracy and robustness of taillight detection in complex urban environments while giving a semantic interpretation of the taillight lamp-signal information has always been a key problem to be solved urgently in advanced driver assistance systems (ADAS). The present invention proposes an image-based taillight detection and recognition method that can effectively improve the detection accuracy of the leading vehicle's taillight region, gives a semantic interpretation of the taillight state image, and fills the gap left by existing methods in the field of taillight state recognition.
Summary of the invention
In order to solve the problem of low taillight detection accuracy caused by the excessive dependence of existing taillight detection methods on a single color space, and at the same time to fill the gap of existing methods in the direction of taillight state recognition, the present invention proposes an image-based taillight detection and recognition method. The method uses real-time images of the vehicle ahead collected by an ordinary camera and preprocesses them by gradient sharpening and image cropping; adaptive threshold segmentation is carried out by combining the HSI and RGB color spaces to extract the color information of the taillights; after filtering, denoising and morphological transformation, contours are extracted and geometric constraints are used to group taillight pairs; the state information is judged hierarchically based on SVM, and a semantic interpretation of the leading vehicle's taillight image is output.
The present invention is realized by adopting the following technical scheme:
An image-based taillight detection and recognition method comprises the following steps:
Step S1: acquire a real-time image of the vehicle ahead with an ordinary camera, enhance the contrast of the real-time image by gradient sharpening, then crop the real-time image and take the lower 4/5 of the image as the image to be detected.
Step S2: segment the image to be detected using an information-feature conversion method across multiple color spaces, thereby obtaining the taillight region image. The specific steps are as follows:
Step S21: convert the color image obtained in step S1 to the HSI space; after normalization, take red as the segmentation condition and obtain a preliminary taillight region using the following threshold segmentation: H ∈ (0.0, 0.02) ∪ (0.95, 1.0), S ∈ (0.18, 1.0), I ∈ (0.1, 0.6).
Pixels that satisfy the threshold condition are marked white, and pixels that do not are marked black;
Step S22: using inter-image arithmetic (addition first, then subtraction), retain the HSI-space segmentation result on the image from step S1;
Step S23: in the RGB color space, split the color image obtained in step S22 into three color channels, and apply adaptive threshold segmentation to the red-channel image; as in step S21, pixels that satisfy the condition are marked white and pixels that do not are marked black, yielding a refined taillight region segmentation.
Step S3: extract contour information and use geometric constraints to match the most similar taillight regions into a taillight pair. The specific steps are as follows:
Step S31: filter and denoise the taillight region image obtained in step S2 to eliminate the small salt-and-pepper noise points present in the image; carry out morphological processing to fill the holes inside the isolated taillight regions and keep the original connected regions connected, obtaining a number of irregular connected regions;
Step S32: traverse the irregular connected regions in the image obtained in step S31, obtain the minimum bounding rectangle of each connected region and store the geometric information of the bounding rectangle; use geometric constraints to match the two most similar connected regions as one group, i.e. a taillight pair, and mark the matched taillight pair regions on the image obtained in step S1.
Step S4: construct the classification feature vector by fusing multi-color-space information, for the training and testing of the SVM. The specific steps are as follows:
Step S41: set the taillight pair region obtained in step S32 as the region of interest (ROI) to reduce processing complexity and improve recognition efficiency;
Step S42: convert the ROI image obtained in step S41 to the L*a*b* space and split it into the three channels L*, a*, b*; convert the ROI image obtained in step S41 to the HSV space and split it into the three channels H, S, V; fuse the L*, S and V channel information to construct the color space L*SV, and divide the region into left and right taillight regions; compute the mean gray value of each color channel of the detected taillight regions in this space and the mean gray values of the whole taillight pair region on the L* and V channels; arrange the color-space fusion information into an 8-dimensional feature vector.
Step S5: use support vector machines (SVM) in a layered manner to judge the lamp-signal information contained in the taillights. The specific steps are as follows:
Step S51: select 3 classes of sample sets from the image database and divide them into positive and negative samples; obtain the matrices formed by the 3 kinds of feature vectors according to the above steps, where positive samples are labeled 1 and negative samples are labeled -1; input the matrices to the SVM for training, obtaining 3 SVM classifiers for testing, namely SVM1, SVM2 and SVM3;
Step S52: input the feature vector obtained in step S4 into the SVM classifiers trained in step S51 to perform layered state recognition and lamp-signal judgment of the taillights, and convert the judgment result into a state flag.
Step S6: according to the state flag from step S52, output the corresponding semantic interpretation, completing the recognition of the leading vehicle's taillight state.
As an important part of an on-board advanced driver assistance system, the present invention achieves good processing results and real-time processing capability for the detection of leading-vehicle taillights and the judgment of their state information in complex urban environments.
The beneficial effects of the method of the invention are as follows:
1. Detection based on multiple color spaces avoids the uncertainty of detection in a single space, and the detection and segmentation of the taillight region are more accurate.
2. The contour-based taillight pair matching process employs more constraints, ensuring fewer mismatches and high accuracy of taillight pair localization.
3. The classification feature vector used for SVM detection has a low dimension, which greatly reduces the SVM model training time and significantly improves judgment efficiency.
4. The layered SVM judgment method can classify the actual state of the taillights efficiently and accurately, while reducing the dependence on a single SVM and the training complexity.
5. For red vehicles, which easily cause misjudgment, the customized feature vector extraction method can handle red vehicles correctly, making up for the deficiency of existing methods.
Brief description of the drawings
Fig. 1 shows the flow chart of the image-based taillight detection and recognition method of the present invention.
Fig. 2a shows the input original image.
Fig. 2b shows the image to be detected after sharpening and cropping.
Fig. 3a shows the image to be segmented.
Fig. 3b shows the final segmentation result of the taillight region across multiple color spaces.
Fig. 3c shows the taillight region image after filtering, denoising and morphological processing.
Fig. 3d shows the image with the matched and marked taillight pair.
Fig. 4 shows the flow chart of taillight pair matching.
Fig. 5 shows the layered judgment strategy of the SVM.
Fig. 6a shows a single-vehicle recognition result.
Fig. 6b shows a multi-vehicle recognition result.
Detailed description of embodiments
Specific embodiments of the invention are described in detail below with reference to the accompanying drawings.
An image-based taillight detection and recognition method, whose flow chart is shown in Fig. 1, comprises the following steps:
Step S1: preprocess the input image. The contrast of the original image is enhanced by gradient sharpening; the gradient sharpening uses the Laplacian sharpening method, and the Laplacian kernel used is:
LaplaceKernel =
[  0  -1   0 ]
[ -1   5  -1 ]
[  0  -1   0 ]
The original image is then cropped, and the lower 4/5 of the original image is taken as the actual taillight detection region. Fig. 2a shows the input original image and Fig. 2b shows the image to be detected after sharpening and cropping.
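By way of illustration, a minimal C++/OpenCV sketch of this preprocessing step is given below; the function name preprocessFrame and the use of cv::filter2D are illustrative assumptions rather than part of the original text, while the 3x3 kernel and the lower-4/5 crop follow the values given above.

#include <opencv2/opencv.hpp>

// Sharpen a frame with the 3x3 Laplacian-based kernel given above, then keep
// only the lower 4/5 of the image as the region actually searched for taillights.
cv::Mat preprocessFrame(const cv::Mat& frame)
{
    cv::Mat kernel = (cv::Mat_<float>(3, 3) <<
                       0, -1,  0,
                      -1,  5, -1,
                       0, -1,  0);
    cv::Mat sharpened;
    cv::filter2D(frame, sharpened, frame.depth(), kernel);

    int top = sharpened.rows / 5;                       // discard the top 1/5
    cv::Rect lowerPart(0, top, sharpened.cols, sharpened.rows - top);
    return sharpened(lowerPart).clone();
}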
Step S2: obtain the taillight region by segmentation based on color features, using an information-feature conversion method across multiple color spaces. The specific steps are as follows:
Step S21: convert the color image obtained in step S1 to the HSI space; after normalization, take red as the segmentation condition and obtain a preliminary taillight region using the following threshold segmentation: H ∈ (0.0, 0.02) ∪ (0.95, 1.0), S ∈ (0.18, 1.0), I ∈ (0.1, 0.6).
Pixels that satisfy the threshold condition are marked white, i.e. their gray value is set to 255, and pixels that do not are marked black, i.e. their gray value is set to 0.
Step S22: using inter-image arithmetic (addition first, then subtraction), retain the HSI-space segmentation result on the image from step S1.
Step S23: in the RGB color space, split the color image obtained in step S22 into the three color channels R, G and B, and apply adaptive threshold segmentation to the R channel, i.e. the red-channel image; the maximum between-class variance method (Otsu's method) is used here because it gives a good segmentation result. As in step S21, pixels that satisfy the condition are marked white (gray value 255) and pixels that do not are marked black (gray value 0), yielding a refined taillight region segmentation. Fig. 3a shows the image to be segmented and Fig. 3b shows the final segmentation result of the taillight region across multiple color spaces.
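A simplified C++/OpenCV sketch of this two-space segmentation is shown below. OpenCV has no built-in HSI conversion, so the sketch approximates the HSI test with an HSV conversion (V standing in for intensity I) and rescales the normalized thresholds above to OpenCV's 8-bit ranges (H: 0-179, S and V: 0-255); the "add then subtract" retention step is likewise replaced by a simple mask copy. These simplifications are assumptions made for illustration only.

#include <opencv2/opencv.hpp>
#include <vector>

// Coarse red segmentation in an (approximate) HSI space, followed by Otsu
// refinement on the red channel of the retained pixels.
cv::Mat segmentTaillights(const cv::Mat& bgr)
{
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

    // H in (0.0, 0.02) U (0.95, 1.0), S in (0.18, 1.0), I in (0.1, 0.6),
    // rescaled to 8-bit HSV ranges.
    cv::Mat lowRed, highRed, redMask;
    cv::inRange(hsv, cv::Scalar(0,   46, 26), cv::Scalar(3,   255, 153), lowRed);
    cv::inRange(hsv, cv::Scalar(171, 46, 26), cv::Scalar(179, 255, 153), highRed);
    redMask = lowRed | highRed;

    // Keep only the pixels of the original image selected by the HSI-like mask.
    cv::Mat masked;
    bgr.copyTo(masked, redMask);

    // Refinement: Otsu threshold (maximum between-class variance) on the R channel.
    std::vector<cv::Mat> channels;
    cv::split(masked, channels);              // for a BGR image, channels[2] is R
    cv::Mat refined;
    cv::threshold(channels[2], refined, 0, 255,
                  cv::THRESH_BINARY | cv::THRESH_OTSU);
    return refined;                           // white = candidate taillight pixels
}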
Step S3: extract contour information and use geometric constraints to match the most similar taillight regions into a taillight pair. The specific steps are as follows:
Step S31: filter and denoise the taillight region image obtained in step S2 by median filtering to eliminate the small salt-and-pepper noise points present in the image; carry out morphological processing, dilating after several closing operations, to fill the holes inside the isolated taillight regions and keep the original connected regions connected, obtaining a number of irregular connected regions. The structuring element required for the morphological processing is defined as a 5x5 ellipse; the closing operation is performed 7 times and the dilation 3 times, which preserves the original features of the taillight region as much as possible without enlarging it. Fig. 3b shows the final segmentation result of the taillight region across multiple color spaces and Fig. 3c shows the taillight region image after filtering, denoising and morphological processing.
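A short C++/OpenCV sketch of this filtering and morphology stage follows; the 5x5 elliptical structuring element, 7 closings and 3 dilations take the values stated above, while the median-filter aperture of 5 and the function name cleanMask are assumptions.

#include <opencv2/opencv.hpp>

// Remove salt-and-pepper noise, then fill holes inside the taillight regions.
cv::Mat cleanMask(const cv::Mat& binaryMask)
{
    cv::Mat cleaned;
    cv::medianBlur(binaryMask, cleaned, 5);

    cv::Mat se = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
    cv::morphologyEx(cleaned, cleaned, cv::MORPH_CLOSE, se,
                     cv::Point(-1, -1), 7);                // 7 closing operations
    cv::dilate(cleaned, cleaned, se, cv::Point(-1, -1), 3); // 3 dilations
    return cleaned;
}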
Step S32: traverse the irregular connected regions in the image obtained in step S31, compute the contour of each connected region, then approximate each contour with a rectangle to obtain the minimum bounding rectangle of the connected region and store its geometric information; use geometric constraints to match the two most similar connected regions as one group, i.e. a taillight pair, and identify the detected taillight regions with green boxes and the matched taillight pair regions with red boxes on the image obtained in step S1. Fig. 3a shows the image to be segmented and Fig. 3d shows the image with the matched and marked taillight pair. The computed minimum bounding rectangle contour information is stored as a structure array Rec containing the rectangle length (L), width (W), area (A) and center point coordinates (Midx, Midy). The geometric constraints used in taillight pair matching are as follows; they involve the length, width, area similarity and center point coordinates of the two contours and the image coordinates:
Kx1×Rec[i].L≤|Rec[i].Midx-Rec[k].Midx|≤Kx2×Rec[i].L
|Rec[i].A-Rec[k].A|≤KA×min{Rec[i].A,Rec[k].A}
KW1×Rec[i].W≤Rec[k].W≤KW2×Rec[i].W
KL1×Rec[i].L≤Rec[k].L≤KL2×Rec[i].L
|Rec[i].Midy-Rec[k].Midy|≤Ky
where Rec[i] and Rec[k] are the two contours to be matched, and Kx1=1, Kx2=10, KA=2, KW1=0.5, KW2=1.5, KL1=0.5, KL2=1.5, Ky=13 are prior parameters set subjectively through experiments. If the contour area of a detected connected region is too large (here set as 3000 ≤ Rec[i].A ≤ 30000), it is most likely that the leading vehicle's body is red, rather than the whole image being misrecognized as one contour; such a region is not passed to the taillight pair matching process but is sent directly to the customized feature vector extraction process of step S4. Fig. 4 shows the flow chart of taillight pair matching.
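An illustrative C++/OpenCV sketch of the matching stage is given below. It uses the constants listed above, but it makes several simplifying assumptions: the upright bounding rectangle (cv::boundingRect) stands in for the minimum enclosing rectangle, L is taken as the rectangle width and W as its height, and all constraint-satisfying pairs are returned rather than only the single most similar one.

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

// Geometric information of one connected region's bounding rectangle.
struct Rec { double L, W, A, Midx, Midy; };

// Traverse the connected regions of a binary mask and return index pairs of
// regions that satisfy the geometric constraints listed above.
std::vector<std::pair<int, int>> matchTaillights(const cv::Mat& mask)
{
    cv::Mat work = mask.clone();              // findContours may modify its input
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(work, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<Rec> recs;
    for (const auto& c : contours) {
        cv::Rect r = cv::boundingRect(c);
        recs.push_back({ (double)r.width, (double)r.height, (double)r.area(),
                         r.x + r.width / 2.0, r.y + r.height / 2.0 });
    }

    const double Kx1 = 1, Kx2 = 10, KA = 2, KW1 = 0.5, KW2 = 1.5,
                 KL1 = 0.5, KL2 = 1.5, Ky = 13;

    std::vector<std::pair<int, int>> pairs;
    for (size_t i = 0; i < recs.size(); ++i)
        for (size_t k = i + 1; k < recs.size(); ++k) {
            const Rec& a = recs[i];
            const Rec& b = recs[k];
            double dx = std::abs(a.Midx - b.Midx);
            bool ok = dx >= Kx1 * a.L && dx <= Kx2 * a.L
                   && std::abs(a.A - b.A) <= KA * std::min(a.A, b.A)
                   && b.W >= KW1 * a.W && b.W <= KW2 * a.W
                   && b.L >= KL1 * a.L && b.L <= KL2 * a.L
                   && std::abs(a.Midy - b.Midy) <= Ky;
            if (ok) pairs.push_back({ (int)i, (int)k });   // candidate taillight pair
        }
    return pairs;
}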
Step S4: construct the classification feature vector by fusing multi-color-space information, for the training and testing of the SVM. The specific steps are as follows:
Step S41: set the taillight pair region obtained in step S3 as the region of interest (ROI) to reduce processing complexity and improve recognition efficiency. If step S3 determines that the detected vehicle is red, a targeted strategy is applied to the problem that the detection region of a red vehicle body is too large: the whole detected body part is defined as the ROI region, and the left and right taillight regions are defined within it for feature vector extraction in step S42.
Step S42: convert the ROI image obtained in step S41 to the L*a*b* space and split it into the three channels L*, a*, b*; convert the ROI image obtained in step S41 to the HSV space and split it into the three channels H, S, V; fuse the L*, S and V channel information to construct the color space L*SV, and divide the region into left and right taillight regions according to the stored taillight pair data; compute the mean gray value of each color channel of the detected taillight regions in this space and the mean gray values of the whole taillight pair region on the L* and V channels; arrange the color-space fusion information into an 8-dimensional feature vector in the following order: the L*, S and V channel mean gray values of the left lamp, the L*, S and V channel mean gray values of the right lamp, and the L* and V channel mean gray values of the taillight pair ROI image. Because the constructed multi-color-space information fusion feature vector has a low dimension, the SVM model training time and efficiency are greatly improved compared with existing methods.
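A compact C++/OpenCV sketch of the feature construction is given below; the function name buildFeature and the representation of the left and right lamp regions as cv::Rect arguments are assumptions, while the channel choices and the 8-dimensional ordering follow the description above.

#include <opencv2/opencv.hpp>
#include <vector>

// Build the 8-dimensional feature vector: mean L*, S, V of the left lamp,
// mean L*, S, V of the right lamp, and mean L*, V of the whole taillight ROI.
std::vector<float> buildFeature(const cv::Mat& roiBgr,
                                const cv::Rect& leftLamp,
                                const cv::Rect& rightLamp)
{
    cv::Mat lab, hsv;
    cv::cvtColor(roiBgr, lab, cv::COLOR_BGR2Lab);
    cv::cvtColor(roiBgr, hsv, cv::COLOR_BGR2HSV);

    std::vector<cv::Mat> labCh, hsvCh;
    cv::split(lab, labCh);                    // labCh[0] = L*
    cv::split(hsv, hsvCh);                    // hsvCh[1] = S, hsvCh[2] = V

    auto meanIn = [](const cv::Mat& ch, const cv::Rect& r) {
        return (float)cv::mean(ch(r))[0];     // mean gray value inside the region
    };

    cv::Rect whole(0, 0, roiBgr.cols, roiBgr.rows);
    return {
        meanIn(labCh[0], leftLamp),  meanIn(hsvCh[1], leftLamp),  meanIn(hsvCh[2], leftLamp),
        meanIn(labCh[0], rightLamp), meanIn(hsvCh[1], rightLamp), meanIn(hsvCh[2], rightLamp),
        meanIn(labCh[0], whole),     meanIn(hsvCh[2], whole)
    };
}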
Step S5: use support vector machines (SVM) in a layered manner to judge the lamp-signal information contained in the taillights. A complete taillight group generally comprises turn, brake, rear position, hazard, fog and reverse lamps; the ways in which these lamps prompt following vehicles about driving conditions differ, but in general the differences lie only in the on/off pattern and color state. The most worthwhile to study are the behavior of the turn and brake lamps: the turn lamp indicates whether the current travel route of the vehicle is about to change, such as turning or changing lanes, while the brake lamp indicates whether the current travel speed of the vehicle is about to change, such as decelerating or emergency braking. Therefore, by mainly studying the state changes of the turn and brake lamps, the information of the vehicle ahead can be grasped effectively. The specific steps are as follows:
Step S51: the image sources used for training and testing include actually captured pictures, Internet images and standard data sets; pictures taken at different times, under different weather, different lighting and different taillight configurations are selected to build the image database. Three classes of sample sets are selected from the image database and divided into positive and negative samples; the matrices formed by the 3 kinds of feature vectors are obtained according to the above steps, where positive samples are labeled 1 and negative samples are labeled -1; the matrices are input to the SVM for training, yielding 3 SVM classifiers for testing, namely SVM1, SVM2 and SVM3. SVM1 handles the question of whether the vehicle is turning: the first-level classifier of the hierarchy decides the state class "turning" or "not turning". SVM2 handles the question of whether the vehicle is braking: the second-level classifier decides the state class "braking" or "lamps not working". SVM3 handles the question of the turning direction: the third-level classifier decides the state class "turning left" or "turning right". Accordingly, 3 classes of sample sets are chosen for the different classification purposes: in the sample set used to train SVM1, positive samples are images of turning vehicles and negative samples are images of braking or normally driving vehicles; in the sample set used to train SVM2, positive samples are images of braking vehicles and negative samples are images of normally driving vehicles; in the sample set used to train SVM3, positive samples are images of right-turning vehicles and negative samples are images of left-turning vehicles.
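For illustration, a minimal training sketch with OpenCV's ml module is shown below (cv::ml::SVM is the OpenCV 3.x counterpart of the CvSVM class mentioned in the experimental setup further below); the RBF kernel matches that setup, while the default C and gamma values and the omission of any parameter search are simplifying assumptions.

#include <opencv2/opencv.hpp>

// Train one of the three binary classifiers (SVM1/SVM2/SVM3) from an N x 8
// matrix of 32-bit float features and an N x 1 matrix of +1/-1 integer labels.
cv::Ptr<cv::ml::SVM> trainClassifier(const cv::Mat& features, const cv::Mat& labels)
{
    cv::Ptr<cv::ml::SVM> svm = cv::ml::SVM::create();
    svm->setType(cv::ml::SVM::C_SVC);
    svm->setKernel(cv::ml::SVM::RBF);         // RBF radial kernel, as in the experiments
    svm->train(features, cv::ml::ROW_SAMPLE, labels);
    return svm;
}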
Step S52: input the 8-dimensional feature vector obtained in step S4 into the SVM classifiers trained in step S51 to perform layered state recognition and lamp-signal judgment of the taillights, and convert the judgment result into a state flag. The layered judgment of the taillight state based on SVM is shown in Fig. 5 and proceeds as follows:
Step S521: input the feature vector obtained in step S4 into SVM1 for judgment; if the result is "not turning", go to step S522, otherwise go to step S523.
Step S522: input the feature vector into SVM2 for judgment; if the result is "braking", set the flag to the braking state, otherwise set the flag to the "lamps not working" state.
Step S523: input the feature vector into SVM3 for judgment; if the result is "turning left", set the flag to the left-turn state, otherwise set the flag to the right-turn state.
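The layered decision of steps S521-S523 can be sketched as follows; the helper classifyState and the direct return of the semantic strings used in step S6 are assumptions made to keep the example short.

#include <opencv2/opencv.hpp>
#include <string>

// SVM1: turning vs. not turning; SVM2: braking vs. lamps not working;
// SVM3: turning right vs. turning left (positive label = right, as trained above).
std::string classifyState(const cv::Ptr<cv::ml::SVM>& svm1,
                          const cv::Ptr<cv::ml::SVM>& svm2,
                          const cv::Ptr<cv::ml::SVM>& svm3,
                          const cv::Mat& feature)   // 1 x 8, CV_32F
{
    if (svm1->predict(feature) < 0) {         // not turning
        return svm2->predict(feature) > 0 ? "stop" : "hold";
    }
    return svm3->predict(feature) > 0 ? "turn right" : "turn left";
}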
Step S6: according to the state flag from step S52, output the corresponding semantic interpretation, completing the recognition of the leading vehicle's taillight state. Here the state judgment result of the leading vehicle's taillights is output only in text form, where "hold" indicates that the leading vehicle is driving normally, i.e. the lamps are not working, "stop" indicates that the leading vehicle is braking, "turn left" indicates that the leading vehicle is turning left, and "turn right" indicates that the leading vehicle is turning right. Fig. 6a shows a single-vehicle recognition result and Fig. 6b shows a multi-vehicle recognition result.
The experimental environment of this embodiment is VS2015 with the OpenCV 3.1 library, on a personal computer running 64-bit Windows 10, configured with an Intel(R) Core(TM) i5-6300HQ CPU @ 2.30 GHz and 8 GB of 2133 MHz memory. The program code is written in C++; the image processing uses functions from the OpenCV library, the SVM classifier uses CvSVM, and the RBF radial kernel is chosen for model training.
The above is only a preferred embodiment of the present invention and does not limit the present invention in any form. Any simple modification or equivalent variation made to the above embodiment according to the technical spirit of the present invention falls within the scope of the technical solution of the present invention.

Claims (7)

1. An image-based taillight detection and recognition method, characterized in that it comprises the following steps:
Step S1): acquire a real-time image of the vehicle ahead with an ordinary camera, enhance the contrast of the real-time image by gradient sharpening, then crop the real-time image and take the lower 4/5 of the image as the image to be detected;
Step S2): segment the image to be detected using an information-feature conversion method across multiple color spaces, thereby obtaining the taillight region image; the specific steps are as follows:
Step S21): convert the color image obtained in step S1 to the HSI space; after normalization, take red as the segmentation condition and obtain a preliminary taillight region using the following threshold segmentation:
H ∈ (0.0, 0.02) ∪ (0.95, 1.0)
S ∈ (0.18, 1.0)
I ∈ (0.1, 0.6)
Pixels that satisfy the threshold condition are marked white, and pixels that do not are marked black;
Step S22): using inter-image arithmetic (addition first, then subtraction), retain the HSI-space segmentation result on the image from step S1;
Step S23): in the RGB color space, split the color image obtained in step S22 into three color channels, and apply adaptive threshold segmentation to the red-channel image; as in step S21, pixels that satisfy the condition are marked white and pixels that do not are marked black, yielding a refined taillight region segmentation;
Step S3): extract contour information and use geometric constraints to match the most similar taillight regions into a taillight pair; the specific steps are as follows:
Step S31): filter and denoise the taillight region image obtained in step S2 to eliminate the small salt-and-pepper noise points present in the image; carry out morphological processing to fill the holes inside the isolated taillight regions and keep the original connected regions connected, obtaining a number of irregular connected regions;
Step S32): traverse the irregular connected regions in the image obtained in step S31, obtain the minimum bounding rectangle of each connected region and store the geometric information of the bounding rectangle; use geometric constraints to match the two most similar connected regions as one group, i.e. a taillight pair, and mark the matched taillight pair regions on the image obtained in step S1;
Step S4): construct the classification feature vector by fusing multi-color-space information, for the training and testing of the SVM; the specific steps are as follows:
Step S41): set the taillight pair region obtained in step S32 as the region of interest (ROI) to reduce processing complexity and improve recognition efficiency;
Step S42): convert the ROI image obtained in step S41 to the L*a*b* space and split it into the three channels L*, a*, b*; convert the ROI image obtained in step S41 to the HSV space and split it into the three channels H, S, V; fuse the L*, S and V channel information to construct the color space L*SV, and divide the region into left and right taillight regions; compute the mean gray value of each color channel of the detected taillight regions in this space and the mean gray values of the whole taillight pair region on the L* and V channels; arrange the color-space fusion information into an 8-dimensional feature vector;
Step S5): use support vector machines (SVM) in a layered manner to judge the lamp-signal information contained in the taillights; the specific steps are as follows:
Step S51): select 3 classes of sample sets from the image database and divide them into positive and negative samples; obtain the matrices formed by the 3 kinds of feature vectors according to the above steps, where positive samples are labeled 1 and negative samples are labeled -1; input the matrices to the SVM for training, obtaining 3 SVM classifiers for testing, namely SVM1, SVM2 and SVM3;
Step S52): input the feature vector obtained in step S4 into the SVM classifiers trained in step S51 to perform layered state recognition and lamp-signal judgment of the taillights, and convert the judgment result into a state flag;
Step S6): according to the state flag from step S52, output the corresponding semantic interpretation, completing the recognition of the leading vehicle's taillight state.
2. The image-based taillight detection and recognition method according to claim 1, characterized in that: in step S1), the input image is preprocessed and the contrast of the original image is enhanced by gradient sharpening; the gradient sharpening uses the Laplacian sharpening method, and the Laplacian kernel used is:
LaplaceKernel =
[  0  -1   0 ]
[ -1   5  -1 ]
[  0  -1   0 ]
and the original image is cropped, taking the lower 4/5 of the original image as the actual taillight detection region.
3. The image-based taillight detection and recognition method according to claim 1, characterized in that: in step S23), the adaptive threshold segmentation method used for the fine segmentation of the taillight region in RGB space is the maximum between-class variance method.
4. The image-based taillight detection and recognition method according to claim 1, characterized in that: in step S31), the structuring element required for the morphological processing is defined as a 5x5 ellipse, which preserves the original features of the taillight region as much as possible;
in step S32), the computed minimum bounding rectangle contour information is stored as a structure array Rec containing the rectangle length (L), width (W), area (A) and center point coordinates (Midx, Midy); the geometric constraints used in taillight pair matching are as follows and involve the length, width, area similarity and center point coordinates of the two contours and the image coordinates:
Kx1×Rec[i].L≤|Rec[i].Midx-Rec[k].Midx|≤Kx2×Rec[i].L
|Rec[i].A-Rec[k].A|≤KA×min{Rec[i].A,Rec[k].A}
KW1×Rec[i].W≤Rec[k].W≤KW2×Rec[i].W
KL1×Rec[i].L≤Rec[k].L≤KL2×Rec[i].L
|Rec[i].Midy-Rec[k].Midy|≤Ky
where Rec[i] and Rec[k] are the two contours to be matched, and Kx1, Kx2, KA, KW1, KW2, KL1, KL2, Ky are prior parameters set subjectively through experiments; if the contour area of a detected connected region is too large, it is most likely that the leading vehicle's body is red rather than the whole image being misrecognized as one contour; such a region is not passed to the taillight pair matching process but is sent directly to the customized feature vector extraction process of step S4.
5. The image-based taillight detection and recognition method according to claim 4, characterized in that: in step S41), if step S3 determines that the detected vehicle is red, a targeted strategy is applied to the problem that the detection region of a red vehicle body is too large: the whole detected body part is defined as the ROI region, and the left and right taillight regions are defined within it for feature vector extraction in step S42.
6. The image-based taillight detection and recognition method according to claim 5, characterized in that: in step S51), SVM1 handles the question of whether the vehicle is turning, i.e. the first-level classifier of the SVM hierarchy decides the state class "turning" or "not turning"; SVM2 handles the question of whether the vehicle is braking, i.e. the second-level classifier decides the state class "braking" or "lamps not working"; SVM3 handles the question of the turning direction, i.e. the third-level classifier decides the state class "turning left" or "turning right"; accordingly, 3 classes of sample sets are chosen for the different classification purposes: in the sample set used to train SVM1, positive samples are images of turning vehicles and negative samples are images of braking or normally driving vehicles; in the sample set used to train SVM2, positive samples are images of braking vehicles and negative samples are images of normally driving vehicles; in the sample set used to train SVM3, positive samples are images of right-turning vehicles and negative samples are images of left-turning vehicles;
in step S52), the layered judgment of the taillight state based on SVM proceeds as follows:
Step S521): input the feature vector obtained in step S4 into SVM1 for judgment; if the result is "not turning", go to step S522, otherwise go to step S523;
Step S522): input the feature vector into SVM2 for judgment; if the result is "braking", set the flag to the braking state, otherwise set the flag to the "lamps not working" state;
Step S523): input the feature vector into SVM3 for judgment; if the result is "turning left", set the flag to the left-turn state, otherwise set the flag to the right-turn state.
7. The image-based taillight detection and recognition method according to claim 6, characterized in that: in step S6), according to the state flag from step S52, the corresponding semantic interpretation is output, completing the recognition of the leading vehicle's taillight state; here the state judgment result of the leading vehicle's taillights is output only in text form, where "hold" indicates that the leading vehicle is driving normally, i.e. the lamps are not working, "stop" indicates that the leading vehicle is braking, "turn left" indicates that the leading vehicle is turning left, and "turn right" indicates that the leading vehicle is turning right.
CN201710101262.0A 2017-02-24 2017-02-24 Taillight detection and recognition methods based on image Expired - Fee Related CN106845453B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710101262.0A CN106845453B (en) 2017-02-24 2017-02-24 Taillight detection and recognition methods based on image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710101262.0A CN106845453B (en) 2017-02-24 2017-02-24 Taillight detection and recognition methods based on image

Publications (2)

Publication Number Publication Date
CN106845453A true CN106845453A (en) 2017-06-13
CN106845453B CN106845453B (en) 2019-10-15

Family

ID=59134504

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710101262.0A Expired - Fee Related CN106845453B (en) 2017-02-24 2017-02-24 Taillight detection and recognition methods based on image

Country Status (1)

Country Link
CN (1) CN106845453B (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107316010A (en) * 2017-06-13 2017-11-03 武汉理工大学 A kind of method for recognizing preceding vehicle tail lights and judging its state
CN107392116A (en) * 2017-06-30 2017-11-24 广州广电物业管理有限公司 A kind of indicator lamp recognition methods and system
CN107463886A (en) * 2017-07-20 2017-12-12 北京纵目安驰智能科技有限公司 A kind of double method and systems for dodging identification and vehicle obstacle-avoidance
CN107679508A (en) * 2017-10-17 2018-02-09 广州汽车集团股份有限公司 Road traffic sign detection recognition methods, apparatus and system
CN107992810A (en) * 2017-11-24 2018-05-04 智车优行科技(北京)有限公司 Vehicle identification method and device, electronic equipment, computer program and storage medium
CN108357418A (en) * 2018-01-26 2018-08-03 河北科技大学 A kind of front truck driving intention analysis method based on taillight identification
CN110276742A (en) * 2019-05-07 2019-09-24 平安科技(深圳)有限公司 Tail light for train monitoring method, device, terminal and storage medium
CN111108505A (en) * 2017-09-20 2020-05-05 图森有限公司 System and method for detecting tail light signal of vehicle
CN111696224A (en) * 2020-04-20 2020-09-22 深圳奥尼电子股份有限公司 Automobile data recorder and intelligent security alarm method thereof
CN112084940A (en) * 2020-09-08 2020-12-15 南京和瑞供应链管理有限公司 Material checking management system and method
CN112699781A (en) * 2020-12-29 2021-04-23 上海眼控科技股份有限公司 Vehicle lamp state detection method and device, computer equipment and readable storage medium
CN112927502A (en) * 2021-01-21 2021-06-08 广州小鹏自动驾驶科技有限公司 Data processing method and device
CN113177949A (en) * 2021-04-16 2021-07-27 中南大学 Large-size rock particle feature identification method and device
WO2022048336A1 (en) * 2020-09-04 2022-03-10 International Business Machines Corporation Coarse-to-fine attention networks for light signal detection and recognition
US11776281B2 (en) 2020-12-22 2023-10-03 Toyota Research Institute, Inc. Systems and methods for traffic light detection and classification
CN111108505B (en) * 2017-09-20 2024-04-26 图森有限公司 System and method for detecting a tail light signal of a vehicle

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103984917A (en) * 2014-04-10 2014-08-13 杭州电子科技大学 Multi-feature nighttime vehicle detection method based on machine vision

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103984917A (en) * 2014-04-10 2014-08-13 杭州电子科技大学 Multi-feature nighttime vehicle detection method based on machine vision

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
AKHAN ALMAGAMBETOV ET AL.: "Autonomous Tracking of Vehicle Taillights from a Mobile Platform using an Embedded Smart Camera", 2012 IEEE International Conference on Distributed Smart Cameras *
AKHAN ALMAGAMBETOV ET AL.: "Robust and Computationally Lightweight Autonomous Tracking of Vehicle Taillights and Signal Detection by Embedded Smart Cameras", IEEE Transactions on Industrial Electronics *
MAURICIO CASARES ET AL.: "A Robust Algorithm for the Detection of Vehicle Turn Signals and Brake Lights", 2012 IEEE Ninth International Conference on Advanced Video and Signal-Based Surveillance *
LIU BO ET AL.: "Nighttime vehicle detection method based on color and motion information", Journal of Image and Graphics *
TIAN QIANG: "Detection of vehicle taillights and recognition of lamp signals", Wanfang Dissertation Database *
FAN HONGWU ET AL.: "Research on a vision-based taillight lamp-signal recognition method for autonomous vehicles", Artificial Intelligence and Recognition Technology *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107316010A (en) * 2017-06-13 2017-11-03 武汉理工大学 A kind of method for recognizing preceding vehicle tail lights and judging its state
CN107392116A (en) * 2017-06-30 2017-11-24 广州广电物业管理有限公司 A kind of indicator lamp recognition methods and system
CN107463886B (en) * 2017-07-20 2023-07-25 北京纵目安驰智能科技有限公司 Double-flash identification and vehicle obstacle avoidance method and system
CN107463886A (en) * 2017-07-20 2017-12-12 北京纵目安驰智能科技有限公司 A kind of double method and systems for dodging identification and vehicle obstacle-avoidance
CN111108505B (en) * 2017-09-20 2024-04-26 图森有限公司 System and method for detecting a tail light signal of a vehicle
CN111108505A (en) * 2017-09-20 2020-05-05 图森有限公司 System and method for detecting tail light signal of vehicle
CN107679508A (en) * 2017-10-17 2018-02-09 广州汽车集团股份有限公司 Road traffic sign detection recognition methods, apparatus and system
CN107992810A (en) * 2017-11-24 2018-05-04 智车优行科技(北京)有限公司 Vehicle identification method and device, electronic equipment, computer program and storage medium
CN108357418A (en) * 2018-01-26 2018-08-03 河北科技大学 A kind of front truck driving intention analysis method based on taillight identification
CN110276742A (en) * 2019-05-07 2019-09-24 平安科技(深圳)有限公司 Tail light for train monitoring method, device, terminal and storage medium
CN111696224A (en) * 2020-04-20 2020-09-22 深圳奥尼电子股份有限公司 Automobile data recorder and intelligent security alarm method thereof
WO2022048336A1 (en) * 2020-09-04 2022-03-10 International Business Machines Corporation Coarse-to-fine attention networks for light signal detection and recognition
GB2614829A (en) * 2020-09-04 2023-07-19 Ibm Coarse-to-fine attention networks for light signal detection and recognition
US11741722B2 (en) 2020-09-04 2023-08-29 International Business Machines Corporation Coarse-to-fine attention networks for light signal detection and recognition
CN112084940A (en) * 2020-09-08 2020-12-15 南京和瑞供应链管理有限公司 Material checking management system and method
US11776281B2 (en) 2020-12-22 2023-10-03 Toyota Research Institute, Inc. Systems and methods for traffic light detection and classification
CN112699781A (en) * 2020-12-29 2021-04-23 上海眼控科技股份有限公司 Vehicle lamp state detection method and device, computer equipment and readable storage medium
CN112927502A (en) * 2021-01-21 2021-06-08 广州小鹏自动驾驶科技有限公司 Data processing method and device
CN113177949A (en) * 2021-04-16 2021-07-27 中南大学 Large-size rock particle feature identification method and device
CN113177949B (en) * 2021-04-16 2023-09-01 中南大学 Large-size rock particle feature recognition method and device

Also Published As

Publication number Publication date
CN106845453B (en) 2019-10-15

Similar Documents

Publication Publication Date Title
CN106845453B (en) Taillight detection and recognition methods based on image
US8184159B2 (en) Forward looking sensor system
CN103984950B (en) A kind of moving vehicle brake light status recognition methods for adapting to detection on daytime
CN109190523B (en) Vehicle detection tracking early warning method based on vision
WO2020000253A1 (en) Traffic sign recognizing method in rain and snow
CN105488453A (en) Detection identification method of no-seat-belt-fastening behavior of driver based on image processing
Kuang et al. Feature selection based on tensor decomposition and object proposal for night-time multiclass vehicle detection
CN103020948A (en) Night image characteristic extraction method in intelligent vehicle-mounted anti-collision pre-warning system
CN107886034B (en) Driving reminding method and device and vehicle
Prakash et al. Robust obstacle detection for advanced driver assistance systems using distortions of inverse perspective mapping of a monocular camera
Liu et al. A large-scale simulation dataset: Boost the detection accuracy for special weather conditions
Tran et al. Real-time traffic light detection using color density
Pradeep et al. An improved technique for night-time vehicle detection
CN104966064A (en) Pedestrian ahead distance measurement method based on visual sense
Boumediene et al. Vehicle detection algorithm based on horizontal/vertical edges
Phu et al. Traffic sign recognition system using feature points
CN113743226B (en) Daytime front car light language recognition and early warning method and system
CN112949595A (en) Improved pedestrian and vehicle safety distance detection algorithm based on YOLOv5
CN108073869A (en) A kind of system of scene cut and detection of obstacles
Nine et al. Traffic Light and Back-light Recognition using Deep Learning and Image Processing with Raspberry Pi
CN105550656A (en) Bayonet picture-based driver safety belt detection method
Shahbaz et al. The Evaluation of Cascade Object Detector in Recognizing Different Samples of Road Signs
Biswas et al. Night mode prohibitory traffic signs detection
Lin et al. A Vision-based Pedestrian Comity Pre-Warning System
CN114299414B (en) Vehicle red light running recognition and judgment method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee (granted publication date: 20191015)