CN109800693B - Night vehicle detection method based on color channel mixing characteristics - Google Patents

Night vehicle detection method based on color channel mixing characteristics

Info

Publication number
CN109800693B
CN109800693B
Authority
CN
China
Prior art keywords
area
night
color channel
vehicle
tail
Prior art date
Legal status
Active
Application number
CN201910015876.6A
Other languages
Chinese (zh)
Other versions
CN109800693A (en)
Inventor
乔瑞萍
董员臣
王方
张连超
Current Assignee
Xi'an Jiaotong University
Original Assignee
Xi'an Jiaotong University
Priority date
Filing date
Publication date
Application filed by Xi'an Jiaotong University
Priority to CN201910015876.6A priority Critical patent/CN109800693B/en
Publication of CN109800693A publication Critical patent/CN109800693A/en
Application granted granted Critical
Publication of CN109800693B publication Critical patent/CN109800693B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a night vehicle detection method based on color channel mixing features, belonging to the field of driver assistance. The RGB and HSV color spaces are combined to extract the color channel mixing features of an image, which greatly reduces dependence on an image sample library and the influence of subjective threshold selection, and enhances the universality of the algorithm so that it adapts to the varied, complex conditions found on roads at night. Second, the color channel mixing features are combined with the OTSU method to obtain the image segmentation threshold adaptively, while a large-area segmentation algorithm removes regions that obviously do not conform to taillight characteristics, greatly improving the accuracy of the algorithm. Finally, three criteria eliminate impossible taillight pairings one by one, giving the method strong robustness. In overall effect, the method is well suited to night-time backgrounds with complex illumination conditions.

Description

Night vehicle detection method based on color channel mixing characteristics
[ technical field ]
The invention belongs to the field of auxiliary driving, relates to a method for detecting vehicles at night, and particularly relates to a method for detecting vehicles at night based on color channel mixing characteristics.
[ background of the invention ]
In recent years, traffic accidents caused by insufficient illumination at night have become increasingly common, while progress on night vehicle detection at home and abroad has been slow owing to its many difficulties. Vehicle detection in a night environment is therefore a very important link in an advanced driver assistance system: for safe driving, drivers urgently need to grasp the road condition ahead of the vehicle quickly and accurately.
Under good daytime lighting conditions, vehicle symmetry, under-vehicle shadow, horizontal edges and similar characteristics are the features most commonly used for vehicle detection. At night, however, the visibility of road scenes is low and images lack detail, making it difficult to identify vehicles from under-vehicle shadows or edge features. Meanwhile, vehicle taillights at night have distinctive characteristics: red color, high brightness, regular shape and good symmetry. Thus, in night vehicle identification, the vehicle can be located by detecting its taillights.
Three kinds of methods are mainly used for detecting vehicles at night: algorithms based on color space channel threshold filtering, algorithms based on brightness and taillight shape, and algorithms based on machine learning. Color space threshold filtering methods, subdivided into RGB and HSV variants, have small computational cost and high speed; however, their thresholds are chosen from statistical analysis of large image sets, the selection of the image sample library is subjective and one-sided, the chosen thresholds fluctuate over a wide range, and the universality of such methods is not high. Methods based on brightness and taillight shape adapt well across colors and reduce dependence on the camera and environment; a more detailed taillight shape model lowers the false-detection rate, but because the actual shape of taillights is uncertain, detections are easily missed and real-time performance is hard to guarantee. Machine-learning methods achieve high accuracy and good results, but the algorithms are complex, real-time performance cannot be guaranteed, driving scenes are complex and changeable, vehicles are diverse, and establishing a sample library is very difficult.
[ summary of the invention ]
The present invention aims to overcome the above disadvantages of the prior art by providing a night vehicle detection method based on color channel mixing characteristics. The two color spaces are combined to extract the color channel mixing features of the image, eliminating most interference from noise light sources against a complex road background and greatly reducing the influence of the image sample library and of subjective threshold selection, while the real-time performance of the algorithm is maintained, so that both the accuracy and the real-time performance of night vehicle detection are greatly improved.
In order to achieve the purpose, the invention adopts the following technical scheme to realize the purpose:
a night vehicle detection method based on color channel mixing characteristics comprises the following steps:
step 1, collecting and preprocessing an image of the tail of a vehicle, and defining an interested area;
step 2, extracting color channel mixing characteristics in the region of interest to obtain a characteristic diagram;
step 3, performing threshold segmentation on the feature map through an OTSU self-adaptive threshold segmentation algorithm to obtain a candidate image of the tail lamp of the night vehicle;
step 4, dividing the area, and removing the area which does not accord with the features of the tail lamp;
and 5, carrying out tail lamp pairing, estimating the width and the height of the vehicle according to the width and the height of the pair of tail lamps, and positioning the position of the vehicle.
Further refinements of the invention are as follows:
the step 1 of preprocessing and defining the region of interest specifically comprises the following steps: and combining the vision and the driving video, and setting the lower two-thirds area of the vehicle tail image in the vertical direction from top to bottom as an interested area.
Extracting the color channel mixing features in the region of interest in step 2 is specifically: extracting the R and G color channels from the RGB color space in the region of interest, converting the region of interest into the HSV color space and extracting the V channel, then performing the algebraic operation (R−G)×V on the R, G and V channels to obtain the color channel mixing feature.
The specific steps of the step 3 are as follows:
step 3-1, marking the feature image as I(x, y), defining T as the segmentation threshold between foreground and background, and setting the image size as M×N;
step 3-2, counting the number of pixels in the feature map whose gray value is greater than T, denoted N0, and the number of pixels whose gray value is less than or equal to T, denoted N1;
step 3-3, denoting the proportion of foreground pixels ω0 and their average gray level μ0, and the proportion of background pixels ω1 and their average gray level μ1:
ω0 = N0/(M×N)
ω1 = N1/(M×N)
Step 3-4, denoting the total average gray level of the image μ and the inter-class variance g:
μ = ω0×μ0 + ω1×μ1
g = ω0×(μ0−μ)² + ω1×(μ1−μ)²
step 3-5, traversing T from 0 to 255 in sequence, finding the T which maximizes g, and recording as threshold;
step 3-6, the maximum value Graymax of the Gray scale in the I (x, y) is obtained through statistics, and then the segmentation threshold value of the feature map is L (threshold x Gray)max
And 3-7, traversing each pixel point I in the feature map; if I > L, setting I = 255 (foreground); otherwise setting I = 0 (background).
The specific method for segmenting the region in the step 4 is as follows:
step 4-1, performing connected-domain labelling on the night vehicle taillight candidate image, and counting the area of each connected domain;
step 4-2, traversing all connected domains; if the area of a connected domain is less than 1500 pixels, no operation is performed; if the area of a connected domain is greater than 1500 pixels, the average gray value of the connected domain is calculated and the domain is segmented a second time using that average gray value as the threshold.
The removing in step 4 of regions that do not conform to taillight characteristics is specifically: performing statistical analysis on the image after the region segmentation of step 4, and removing any taillight candidate region whose area is not within the range of 85 to 1150 pixels.
The statistical analysis is performed on 500 or more candidate night vehicle taillight images.
And step 5, carrying out tail lamp pairing, specifically:
step 5-1, defining a normalized area difference d_area between the candidate regions p, q:
d_area = |area_p − area_q| / max(area_p, area_q)
wherein area_p and area_q are the areas of the candidate regions p and q, respectively;
step 5-2, defining a normalized height difference d_height between the candidate regions p, q:
d_height = |y_p − y_q| / |x_p − x_q|
wherein y_p and y_q are the vertical coordinates of the center points of the candidate regions p and q, and x_p and x_q are the corresponding horizontal coordinates;
step 5-3, defining the height-width ratio d_pair of the taillight pair combo frame:
d_pair = H_pair / W_pair
wherein H_pair and W_pair are the height and width of the smallest rectangle enclosing both candidate regions, determined from the widths and heights w_p, h_p, w_q, h_q of the rectangular frames surrounding the candidate regions p and q;
step 5-4, setting thresholds for d_area and d_height and a threshold range for d_pair; if the area difference or height difference between two taillight regions is larger than its threshold, or the combo frame height-width ratio is not within the threshold range, the two taillights cannot be paired; otherwise, the taillights are paired.
The threshold of d_area is set to 0.15, the threshold of d_height is set to 0.1, and the threshold range of d_pair is 0.2 to 0.4.
The locating of the vehicle position in step 5 is specifically: let h denote the height of the taillight pair combo frame and d the spacing of the taillight pair; the area centered on the taillight pair, with vehicle width 1.2d and vehicle height 3.2h, is the vehicle position.
Compared with the prior art, the invention has the following beneficial effects:
the method combines the RGB color space and the HSV color space at the same time, extracts the color channel mixing characteristics of the obtained image, greatly reduces the dependence on an image sample library and the influence of threshold subjectivity, and is favorable for enhancing the universality of the algorithm so as to adapt to various complex conditions appearing on a road at night; secondly, the color channel mixing characteristics are combined with the OTSU method, the image segmentation threshold is obtained in a self-adaptive mode, and meanwhile, the area which obviously does not accord with the characteristics of the tail lamp is removed by adopting a large-area segmentation algorithm, so that the accuracy of the algorithm is greatly improved; finally, the invention adopts three criteria to eliminate the impossible pairing of the tail lamps one by one, thereby having stronger robustness. The method is more suitable for application in the night background with complex illumination conditions in the overall effect.
[ description of the drawings ]
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic view of region of interest determination of the present invention;
FIG. 3 is a color channel mixing profile of the present invention;
FIG. 4 is a flow chart of the large area segmentation algorithm of the present invention;
FIG. 5 is a schematic view of the present invention for determining vehicle position via a pair of tail lights;
FIG. 6 is a road map of the present invention;
FIG. 7 is a feature diagram obtained by extracting color channel mixture features according to the present invention;
FIG. 8 is a graph of adaptive threshold filtering and artifact removal according to the present invention;
FIG. 9 is a diagram showing the result of the pairing of the rear lights according to the present invention;
FIG. 10 is a map of the positioning of a vehicle according to the present invention on an original image.
[ detailed description of embodiments ]
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments, and are not intended to limit the scope of the present disclosure. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Various structural schematics according to the disclosed embodiments of the invention are shown in the drawings. The figures are not drawn to scale, wherein certain details are exaggerated and possibly omitted for clarity of presentation. The shapes of various regions, layers and their relative sizes and positional relationships shown in the drawings are merely exemplary, and deviations may occur in practice due to manufacturing tolerances or technical limitations, and a person skilled in the art may additionally design regions/layers having different shapes, sizes, relative positions, according to actual needs.
In the context of the present disclosure, when a layer/element is referred to as being "on" another layer/element, it can be directly on the other layer/element or intervening layers/elements may be present. In addition, if a layer/element is "on" another layer/element in one orientation, then that layer/element may be "under" the other layer/element when the orientation is reversed.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The invention is described in further detail below with reference to the accompanying drawings:
the invention relates to a night vehicle detection method based on color channel mixing characteristics, which specifically comprises the following steps as shown in figure 1:
1) region of interest determination
According to human visual perception and analysis of driving video, the sky region of the image, i.e. the top one-third in the vertical direction, is set as a non-detection area; the remaining lower two-thirds is the region of interest detected by the system, as shown in fig. 2.
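As a minimal sketch (not the patent's own code; frame dimensions are illustrative), the region-of-interest step amounts to keeping the lower two-thirds of each frame:

```python
import numpy as np

def region_of_interest(frame):
    """Discard the top third of the frame (sky) and keep the lower two-thirds."""
    h = frame.shape[0]
    return frame[h // 3:, ...]

frame = np.zeros((300, 400, 3), dtype=np.uint8)  # synthetic 300x400 RGB frame
roi = region_of_interest(frame)
print(roi.shape)  # (200, 400, 3)
```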
2) Extracting color channel mixture features
The color channel mixing features of the image are extracted to distinguish the tail lights from other noisy light sources, as shown in fig. 3.
The method comprises the following specific steps:
the first step, in RGB color space, R, G two color channels of the image are extracted;
secondly, converting the image into an HSV color space, and extracting a V channel of the image;
and thirdly, performing the algebraic operation (R−G)×V on the three extracted color channels to obtain the color channel mixing feature of the image.
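A minimal NumPy sketch of the three steps above. Since the HSV value channel V equals max(R, G, B), no full HSV conversion is needed. Two details are assumptions not stated in the patent: negative (R−G) responses are clipped to zero, and the result is rescaled to 8 bits for the later thresholding step:

```python
import numpy as np

def color_mix_feature(rgb):
    """(R - G) x V mixing feature of an RGB image (H x W x 3, uint8)."""
    r = rgb[..., 0].astype(np.int32)
    g = rgb[..., 1].astype(np.int32)
    v = rgb.max(axis=-1).astype(np.int32)   # HSV value channel = max(R, G, B)
    feat = np.clip(r - g, 0, None) * v      # strong response on red, bright pixels
    m = feat.max()
    # rescale to 8-bit so a 0..255 threshold scan applies (assumption)
    return (feat * 255 // m).astype(np.uint8) if m > 0 else feat.astype(np.uint8)

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = [250, 40, 40]     # red taillight pixel: high response
img[1, 1] = [200, 200, 200]   # white street lamp: R == G, zero response
f = color_mix_feature(img)
print(f[0, 0], f[1, 1])  # 255 0
```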
3) Threshold filtering
The image is segmented by an adaptive threshold segmentation algorithm based on the (R−G)×V color channel mixing feature, combined with the OTSU method, to obtain candidate regions of night vehicle taillights. The specific steps are as follows:
step one, for the extracted feature image I(x, y) of size M×N, denoting T as the segmentation threshold between foreground and background;
secondly, counting the number of pixels in the feature map whose gray value is greater than the threshold T, denoted N0, and the number of pixels whose gray value is less than or equal to T, denoted N1;
thirdly, calculating the proportion ω0 of foreground pixels in the whole image and their average gray level μ0, and the proportion ω1 of background pixels in the whole image and their average gray level μ1:
ω0 = N0/(M×N)  (1)
ω1 = N1/(M×N)  (2)
fourthly, denoting the total average gray level of the image μ and the inter-class variance g:
μ = ω0×μ0 + ω1×μ1  (3)
g = ω0×(μ0−μ)² + ω1×(μ1−μ)²  (4)
step five, traversing T from 0 to 255 in sequence, finding a threshold T which enables the inter-class variance g to be maximum, and marking as threshold;
sixthly, obtaining by statistics the maximum gray value Graymax in the (R−G)×V feature map I(x, y); the segmentation threshold of the whole image is then L = threshold × Graymax;
and seventhly, traversing each pixel I in the (R−G)×V feature map; if I > L, setting I = 255 (foreground); otherwise setting I = 0 (background).
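The seven steps above can be sketched as a standard Otsu scan over all 256 gray levels (a minimal NumPy sketch, not the patent's code). It uses the identity g = ω0·ω1·(μ0−μ1)², which is algebraically equal to the form in equation (4), and omits the Graymax rescaling of step six by operating directly on an 8-bit feature map:

```python
import numpy as np

def otsu_threshold(gray):
    """Return T in [0, 255] maximizing the inter-class variance g
    (foreground: gray > T, background: gray <= T)."""
    hist = np.bincount(gray.astype(np.uint8).ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    levels = np.arange(256, dtype=np.float64)
    best_t, best_g = 0, -1.0
    for t in range(256):
        n0 = hist[t + 1:].sum()            # foreground pixel count
        n1 = total - n0                    # background pixel count
        if n0 == 0 or n1 == 0:
            continue
        mu0 = (levels[t + 1:] * hist[t + 1:]).sum() / n0
        mu1 = (levels[:t + 1] * hist[:t + 1]).sum() / n1
        w0, w1 = n0 / total, n1 / total
        g = w0 * w1 * (mu0 - mu1) ** 2     # equals w0*(mu0-u)^2 + w1*(mu1-u)^2
        if g > best_g:
            best_g, best_t = g, t
    return best_t

feat = np.array([[10, 12, 11], [240, 250, 245]], dtype=np.uint8)
t = otsu_threshold(feat)
binary = np.where(feat > t, 255, 0).astype(np.uint8)
print(t)  # 12: the scan splits the two well-separated populations
```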
4) False taillight removal
Through statistical analysis of 500 night vehicle images, the areas of taillight regions are concentrated in the range from 85 to 1150 pixels; regions outside this threshold range, which obviously do not conform to taillight characteristics, are removed, eliminating false taillights.
To handle possible taillight adhesion or the influence of ambient light, a large-area region segmentation algorithm (fig. 4) performs a secondary segmentation, dividing a large block into several sub-blocks to improve subsequent detection precision. The specific steps are as follows:
firstly, performing connected-domain labelling on the segmented feature map, and counting the area of each connected domain;
secondly, traversing all connected domains; if the area of a connected domain is less than 1500 pixels, no operation is performed; if the area of a connected domain is greater than 1500 pixels, the average gray value of the connected domain is calculated and the domain is segmented a second time using that average gray value as the segmentation threshold.
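The two steps above might be sketched as below. The 4-connectivity and the BFS labelling routine are assumptions (any connected-component labelling would do); the 1500-pixel limit is the patent's:

```python
import numpy as np
from collections import deque

def connected_components(binary):
    """4-connected component labelling on a boolean image; returns (labels, count)."""
    labels = np.zeros(binary.shape, dtype=np.int32)
    count = 0
    for sy, sx in zip(*np.nonzero(binary)):
        if labels[sy, sx]:
            continue
        count += 1
        labels[sy, sx] = count
        q = deque([(sy, sx)])
        while q:
            y, x = q.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                        and binary[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = count
                    q.append((ny, nx))
    return labels, count

def split_large_regions(gray, binary, max_area=1500):
    """Re-threshold any component larger than max_area at its own mean gray value."""
    labels, n = connected_components(binary)
    out = binary.copy()
    for k in range(1, n + 1):
        mask = labels == k
        if mask.sum() > max_area:
            mean = gray[mask].mean()
            out[mask & (gray <= mean)] = False   # secondary split at the region mean
    return out

gray = np.tile(np.arange(50, dtype=np.uint8), (50, 1))  # 50x50 gradient region
binary = np.ones((50, 50), dtype=bool)                  # one 2500-pixel component
out = split_large_regions(gray, binary, max_area=1500)
print(binary.sum(), out.sum())  # 2500 1250: the dim half was split off
```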
5) Tail lamp pairing and vehicle positioning
A pair of taillights identifies a vehicle, so the pairing process is used to find all possible taillight pairs in the image, and thus all possible vehicles. Three main taillight pairing rules are used to eliminate impossible pairings one by one, as follows:
first, a normalized area difference d_area between candidate regions p, q is defined:
d_area = |area_p − area_q| / max(area_p, area_q)
wherein area_p and area_q are the areas of the candidate regions p and q, respectively. If the area difference between two candidate taillight regions is larger than the threshold on d_area, they do not form a taillight pair;
second, a normalized height difference d_height between the candidate regions p, q is defined:
d_height = |y_p − y_q| / |x_p − x_q|
wherein y_p and y_q are the vertical coordinates of the center points of the candidate regions p and q, and x_p and x_q are the corresponding horizontal coordinates. If the height difference between two candidate taillight regions is greater than the threshold, they do not form a taillight pair;
thirdly, the height-width ratio d_pair of the taillight pair combo frame is defined:
d_pair = H_pair / W_pair
wherein H_pair and W_pair are the height and width of the smallest rectangle enclosing both candidate regions, determined from the center points (x_p, y_p), (x_q, y_q) and from the widths and heights (w_p, h_p), (w_q, h_q) of the rectangular frames surrounding the candidate regions p and q. If the height-width ratio of the combo frame formed by two candidate taillight regions is not within the threshold range, they do not form a taillight pair.
As shown in Table 1, from the statistical analysis of 500 images, the thresholds used here for d_area, d_height and d_pair are 0.15, 0.1 and [0.2, 0.4], respectively.
Table 1 statistical table of threshold range for pairing tail lamps
Criterion    Threshold
d_area       0.15
d_height     0.1
d_pair       [0.2, 0.4]
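The three criteria can be sketched as a single predicate. Because the patent's equations are embedded as images, the exact forms of d_height and d_pair below are reconstructions from the variables named in the text and should be treated as assumptions; the default thresholds are those reported above:

```python
def taillight_pair(p, q, t_area=0.15, t_height=0.1, t_pair=(0.2, 0.4)):
    """p and q are candidate regions: dicts with center (x, y), box size (w, h)
    and pixel area. Returns True if none of the three criteria rejects the pair."""
    # criterion 1: normalized area difference
    d_area = abs(p["area"] - q["area"]) / max(p["area"], q["area"])
    # criterion 2: vertical offset of the centers over their horizontal
    # spacing (reconstruction)
    d_height = abs(p["y"] - q["y"]) / max(abs(p["x"] - q["x"]), 1e-9)
    # criterion 3: height-width ratio of the combo frame enclosing both lamps
    top = min(p["y"] - p["h"] / 2, q["y"] - q["h"] / 2)
    bottom = max(p["y"] + p["h"] / 2, q["y"] + q["h"] / 2)
    left = min(p["x"] - p["w"] / 2, q["x"] - q["w"] / 2)
    right = max(p["x"] + p["w"] / 2, q["x"] + q["w"] / 2)
    d_pair = (bottom - top) / (right - left)
    return d_area <= t_area and d_height <= t_height and t_pair[0] <= d_pair <= t_pair[1]

left_lamp = {"x": 100, "y": 200, "w": 30, "h": 30, "area": 450}
right_lamp = {"x": 200, "y": 202, "w": 30, "h": 32, "area": 480}
print(taillight_pair(left_lamp, right_lamp))                     # True
print(taillight_pair(left_lamp, {**right_lamp, "area": 2000}))   # False: area mismatch
```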
The final taillight pair combo frame is obtained after exclusion by the three criteria. Let h denote the height of the taillight pair combo frame and d the distance between the paired taillights; the area centered on the taillight pair, with vehicle width 1.2d and vehicle height 3.2h, is the final vehicle position, as shown in fig. 5.
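The positioning rule (vehicle width 1.2d, height 3.2h, centered on the lamp pair) reduces to a few lines; the function name and argument layout are illustrative, not the patent's:

```python
def vehicle_box(left_x, right_x, top_y, bottom_y):
    """Vehicle rectangle from a matched lamp pair: 1.2 x the lamp spacing wide and
    3.2 x the combo-frame height tall, centered on the pair."""
    d = right_x - left_x            # lamp-pair spacing
    h = bottom_y - top_y            # combo-frame height
    cx = (left_x + right_x) / 2
    cy = (top_y + bottom_y) / 2
    w_v, h_v = 1.2 * d, 3.2 * h
    return (cx - w_v / 2, cy - h_v / 2, w_v, h_v)   # (x, y, width, height)

x, y, w, h = vehicle_box(100, 200, 190, 215)
print(w, h)  # 120.0 80.0
```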
Through the above steps, the night-time vehicle location is obtained; examples are shown in figs. 6-10. The night road scene as actually captured is shown in fig. 6; the feature map obtained by extracting the color channel mixing features from the original image is shown in fig. 7; the taillight candidate regions after adaptive threshold filtering and false taillight removal are shown in fig. 8; the taillight pairing result after exclusion by the three criteria is shown in fig. 9; and the vehicle position finally determined on the original image from the pairing result is shown in fig. 10.
The above-mentioned contents are only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited thereby, and any modification made on the basis of the technical idea of the present invention falls within the protection scope of the claims of the present invention.

Claims (9)

1. A night vehicle detection method based on color channel mixing characteristics is characterized by comprising the following steps:
step 1, collecting and preprocessing an image of the tail of a vehicle, and defining an interested area;
step 2, extracting color channel mixing characteristics in the region of interest to obtain a characteristic diagram; the specific method comprises the following steps:
extracting the R and G color channels from the RGB color space in the region of interest, converting the region of interest into the HSV color space and extracting the V channel, and performing the algebraic operation (R−G)×V on the R, G and V channels to obtain the color channel mixing feature;
step 3, performing threshold segmentation on the feature map through an OTSU self-adaptive threshold segmentation algorithm to obtain a candidate image of the tail lamp of the night vehicle;
step 4, dividing the area, and removing the area which does not accord with the features of the tail lamp;
and 5, carrying out tail lamp pairing, estimating the width and the height of the vehicle according to the width and the height of the pair of tail lamps, and positioning the position of the vehicle.
2. The night vehicle detection method based on the color channel mixing feature as claimed in claim 1, wherein the step 1 of preprocessing to define the region of interest specifically comprises: and combining the vision and the driving video, and setting the lower two-thirds area of the vehicle tail image in the vertical direction from top to bottom as an interested area.
3. The night vehicle detection method based on the color channel mixing characteristic as claimed in claim 1, wherein the specific steps of the step 3 are as follows:
step 3-1, marking the feature image as I(x, y), defining T as the segmentation threshold between foreground and background, and setting the image size as M×N;
step 3-2, counting the number of pixels in the feature map whose gray value is greater than T, denoted N0, and the number of pixels whose gray value is less than or equal to T, denoted N1;
step 3-3, denoting the proportion of foreground pixels ω0 and their average gray level μ0, and the proportion of background pixels ω1 and their average gray level μ1:
ω0 = N0/(M×N)
ω1 = N1/(M×N)
Step 3-4, recording the total average gray level of the image as mu, recording the inter-class variance as g:
μ=ω0×μ01×μ1
g=ω0×(μ0-μ)21×(μ1-μ)2
step 3-5, traversing T from 0 to 255 in sequence, finding the T which maximizes g, and recording as threshold;
step 3-6, obtaining by statistics the maximum gray value Graymax in I(x, y); the segmentation threshold of the feature map is then L = threshold × Graymax;
and 3-7, traversing each pixel point I in the feature map; if I > L, setting I = 255 (foreground); otherwise setting I = 0 (background).
4. The night vehicle detection method based on color channel mixing characteristics as claimed in claim 1, wherein the specific method for dividing the region in step 4 is as follows:
step 4-1, performing connected-domain labelling on the night vehicle taillight candidate image, and counting the area of each connected domain;
step 4-2, traversing all connected domains; if the area of a connected domain is less than 1500 pixels, no operation is performed; if the area of a connected domain is greater than 1500 pixels, the average gray value of the connected domain is calculated and the domain is segmented a second time using that average gray value as the threshold.
5. The method for detecting vehicles at night based on color channel mixing characteristics as claimed in claim 1 or 4, wherein the step 4 of removing the regions not conforming to the taillight characteristics comprises: performing statistical analysis on the image after the region segmentation of step 4, and removing any taillight candidate region whose area is not within the range of 85 to 1150 pixels.
6. The method of claim 5, wherein the night vehicle detection method based on color channel mixture features comprises statistically analyzing at least 500 candidate images of the night vehicle tail light.
7. The method for detecting vehicles at night based on color channel mixing characteristics as claimed in claim 1, wherein the tail lamp pairing is performed in step 5, specifically:
step 5-1, defining a normalized area difference d_area between the candidate regions p, q:
d_area = |area_p − area_q| / max(area_p, area_q)
wherein area_p and area_q are the areas of the candidate regions p and q, respectively;
step 5-2, defining a normalized height difference d_height between the candidate regions p, q:
d_height = |y_p − y_q| / |x_p − x_q|
wherein y_p and y_q are the vertical coordinates of the center points of the candidate regions p and q, and x_p and x_q are the corresponding horizontal coordinates;
step 5-3, defining the height-width ratio d_pair of the taillight pair combo frame:
d_pair = H_pair / W_pair
wherein H_pair and W_pair are the height and width of the smallest rectangle enclosing both candidate regions, and w_p, h_p, w_q, h_q are the widths and heights of the rectangular frames surrounding the candidate regions p and q, respectively;
step 5-4, setting thresholds for d_area and d_height and a threshold range for d_pair; if the area difference or height difference between two taillight regions is larger than its threshold, or the combo frame height-width ratio is not within the threshold range, the two taillights cannot be paired; otherwise, the taillights are paired.
8. The method for night vehicle detection based on color channel blending feature of claim 7, wherein the threshold of d_area is set to 0.15, the threshold of d_height is set to 0.1, and the threshold range of d_pair is 0.2 to 0.4.
9. The method for detecting vehicles at night based on color channel mixing characteristics as claimed in claim 7, wherein the step 5 of locating the vehicle position specifically comprises: let the height of the tail lamp pair combo frame be denoted by h, the tail lamp pair pitch be denoted by d, and the area with the tail lamp pair as the center, the vehicle width being 1.2d, and the vehicle height being 3.2h, is the vehicle position.
CN201910015876.6A 2019-01-08 2019-01-08 Night vehicle detection method based on color channel mixing characteristics Active CN109800693B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910015876.6A CN109800693B (en) 2019-01-08 2019-01-08 Night vehicle detection method based on color channel mixing characteristics


Publications (2)

Publication Number Publication Date
CN109800693A (en) 2019-05-24
CN109800693B (en) 2021-05-28

Family

ID=66558703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910015876.6A Active CN109800693B (en) 2019-01-08 2019-01-08 Night vehicle detection method based on color channel mixing characteristics

Country Status (1)

Country Link
CN (1) CN109800693B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110688907B (en) * 2019-09-04 2024-01-23 火丁智能照明(广东)有限公司 Method and device for identifying object based on night road light source
DE102021129832A1 (en) 2021-11-16 2023-05-17 Connaught Electronics Ltd. Vehicle detection using a computer vision algorithm

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101436252A (en) * 2008-12-22 2009-05-20 北京中星微电子有限公司 Method and system for recognizing vehicle body color in vehicle video image
CN103150898A (en) * 2013-01-25 2013-06-12 大唐移动通信设备有限公司 Method and device for detection of vehicle at night and method and device for tracking of vehicle at night
CN103150904A (en) * 2013-02-05 2013-06-12 中山大学 Bayonet vehicle image identification method based on image features
CN106407951A (en) * 2016-09-30 2017-02-15 西安理工大学 Monocular vision-based nighttime front vehicle detection method
CN107992810A (en) * 2017-11-24 2018-05-04 智车优行科技(北京)有限公司 Vehicle identification method and device, electronic equipment, computer program and storage medium
CN108564631A (en) * 2018-04-03 2018-09-21 上海理工大学 Car lamp light guide strip detection method, apparatus and computer readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080166018A1 (en) * 2007-01-05 2008-07-10 Motorola, Inc. Method and apparatus for performing object recognition on a target detected using motion information


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Nighttime vehicle recognition method based on monocular vision; Du Tengzhou, Cao Kai; Computer Engineering and Applications; 2013-01-11; Vol. 50, No. 17; pp. 160-163, 172 *
Detection and recognition of vehicle tail-lamp signals; Tian Qiang et al.; Computer Systems & Applications; 2015-12-31; Vol. 24, No. 11; pp. 213-216 *

Also Published As

Publication number Publication date
CN109800693A (en) 2019-05-24

Similar Documents

Publication Publication Date Title
CN109886896B (en) Blue license plate segmentation and correction method
CN110069986B (en) Traffic signal lamp identification method and system based on hybrid model
CN103971128B (en) A kind of traffic sign recognition method towards automatic driving car
CN109299674B (en) Tunnel illegal lane change detection method based on car lamp
CN103824081B (en) Method for detecting rapid robustness traffic signs on outdoor bad illumination condition
CN107891808B (en) Driving reminding method and device and vehicle
US8019157B2 (en) Method of vehicle segmentation and counting for nighttime video frames
CN105160691A (en) Color histogram based vehicle body color identification method
CN110688907B (en) Method and device for identifying object based on night road light source
CN104050450A (en) Vehicle license plate recognition method based on video
CN107016362B (en) Vehicle weight recognition method and system based on vehicle front windshield pasted mark
CN103324935B (en) Vehicle is carried out the method and system of location and region segmentation by a kind of image
JP2011216051A (en) Program and device for discriminating traffic light
CN102938057B (en) A kind of method for eliminating vehicle shadow and device
CN110084111B (en) Rapid night vehicle detection method applied to self-adaptive high beam
CN103927548B (en) Novel vehicle collision avoiding brake behavior detection method
CN107895151A (en) Method for detecting lane lines based on machine vision under a kind of high light conditions
CN101369312B (en) Method and equipment for detecting intersection in image
CN107563301A (en) Red signal detection method based on image processing techniques
CN105046218A (en) Multi-feature traffic video smoke detection method based on serial parallel processing
CN109800693B (en) Night vehicle detection method based on color channel mixing characteristics
CN106407951A (en) Monocular vision-based nighttime front vehicle detection method
CN111144301A (en) Road pavement defect quick early warning device based on degree of depth learning
CN111401364A (en) License plate positioning algorithm based on combination of color features and template matching
CN107507140A (en) The outdoor scene vehicle shadow disturbance restraining method of highway of feature based fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant