US8019157B2 - Method of vehicle segmentation and counting for nighttime video frames - Google Patents
Method of vehicle segmentation and counting for nighttime video frames
- Publication number: US8019157B2 (application US12/248,054)
- Authority
- US
- United States
- Prior art keywords
- region
- mask
- video frames
- headlight
- background
- Prior art date
- Legal status: Active, expires 2030-04-30
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/215—Motion-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/584—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30236—Traffic on road, railway or crossing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30242—Counting objects in image
Definitions
- the present invention relates to a method of vehicle segmentation and counting for nighttime video frames, and more particularly, to a method of vehicle segmentation and counting that utilizes the property of color variation and headlight information, combined with change detection, in nighttime traffic environments.
- Video object segmentation additionally considers the temporal information so it can process moving objects from video sequences.
- the indoor situation is more extensively discussed than the outdoor condition.
- the video surveillance system is the most common application in multimedia video, and it is unrealistic to deal only with the indoor condition.
- outdoor circumstances can also be separated into daytime and nighttime conditions. Nighttime occupies almost half of each day, so nighttime video object segmentation should be as important as daytime segmentation, yet most reported methods focus on daytime conditions.
- Some methods handling night surveillance sequences focus only on cars by processing headlight-pair information and then excluding other regions in the difference frame. Due to the high brightness of headlights, the headlight information is easy to obtain. However, if the illumination on the ground appears to be a lamp because of over-bright headlights, a single vehicle will be detected as two cars (or bikes) when in fact there is only one. Furthermore, this problem cannot be overcome in such methods because the information of the object body has been lost.
- Some methods use far-infrared images to detect objects by measuring thermal radiation. They can classify cars and pedestrians, but may fail when the shape of the object is asymmetric. Besides, they use only static image information and do not exploit temporal information, so they cannot be employed to accomplish vehicle counting in traffic scenes.
- Another problem in nighttime outdoor segmentation is the shadow effect.
- Methods in the prior art deal with the shadow condition in daytime and obtain satisfactory results.
- most of the shadow detection methods focus on daytime environment without considering the following issues.
- the present invention provides a method of vehicle segmentation and counting for nighttime video frames, comprising: generating a background frame according to the video frames; converting the video frames from RGB data to HSI data; converting the background frame from RGB data to HSI data; inputting the HSI data of the video frames and of the background frame to a Gaussian filter so as to generate Gaussian data of the video frames and of the background frame; updating the Gaussian data of the background frame; determining changed regions and unchanged regions of the video frames to generate a frame difference mask and a background subtraction mask according to the Gaussian data of the video frames and of the background frame; detecting an object region of the video frames to generate a static region mask and a moving region mask according to the frame difference mask and the background subtraction mask; combining the static region mask and the moving region mask to generate an initial object mask; detecting characteristic information of the moving objects to calculate their number; and excluding an interference region of the moving objects and compensating the object region of the moving objects.
- FIG. 1 is a block diagram of the present invention.
- FIG. 2 is a diagram of the object moving region.
- FIG. 3 is a block diagram of histogram-based change detection.
- FIG. 4 is a block diagram of the background updating method.
- FIG. 5( a ) is a diagram illustrating a headlight pair.
- FIG. 5( b ) is a diagram illustrating a headlight pair and two bike headlights.
- FIG. 6 is a diagram illustrating another searching region of two headlight points.
- FIG. 7 is a diagram illustrating a bike structure and the four line segments defined by the present invention.
- FIG. 8( a ) is a diagram illustrating the compensation region of vehicle.
- FIG. 8( b ) is a diagram illustrating the compensation region of bike.
- FIG. 9( a ) is a diagram illustrating shadow prediction region of vehicle.
- FIG. 9( b ) is a diagram illustrating shadow prediction region of bike.
- FIG. 10( a ) is a diagram illustrating counting region and counting line.
- FIG. 10( b ) is a diagram illustrating prediction region of headlight of car.
- FIGS. 11 , 12 , and 13 show the error reduction rates for Dong-men bridge 03, Dong-men bridge 06 and Chang-rong bridge 02 sequences, respectively.
- the computational burden of a segmentation method using motion estimation amounts to over half of the system operations.
- watershed segmentation, a spatial method, is often used to compensate for the deficiencies of temporal methods.
- spatial segmentation must process frames successively, and the time cost of the watershed method is also high, so a method combining spatial and temporal information greatly lowers efficiency and is not suitable for real-time applications.
- the present invention modifies the change-detection-based video segmentation method for daytime environments to be suitable for nighttime environments.
- the modification is described below.
- the HSI transform is used to get the color information for segmentation.
- the present invention uses the first 100 frames to estimate the initial background instead of the first frame.
- by change detection, the present invention can obtain the initial object mask.
- the present invention proposes a method using the color information and a concept of variation ratio to detect and exclude the ground-illumination and shadows following vehicles.
- by headlight detection, the present invention can obtain the headlight information for object region compensation.
- FIG. 1 shows the diagram of the present invention.
- the present invention assumes that the background of video source is static and the camera is fixed because it is normal in applications like surveillance and traffic flow counting systems.
- the concepts in the proposed segmentation method are briefly introduced below, and the details will be described in the next chapter.
- the present invention uses color space transformation to transform RGB (Red, Green, Blue) into HSI (Hue, Saturation, Intensity).
- Change detection model is applied to obtain the changed and unchanged regions.
- the input is the difference frame of inter-frames or background subtraction.
- histogram analysis and pixel classification are performed to determine, for each pixel, whether it has changed or not.
- the present invention employs background estimation, instead of taking the first frame of the sequence, to obtain the background frame, in order to handle moving objects that are present in the first frame.
- the present invention applies object region detection combining the output of change detection to obtain the conditions of object region including moving region, still region, background region and uncovered background region, and then the initial object mask is obtained by combining the moving and still regions.
- the present invention proposes a concept of color variation ratio and uses the color information in background region to detect possible ground-illumination pixels and then remove them from initial object mask.
- the present invention proposes a method for classifying car and bike headlights, and then, for each vehicle, uses the headlight(s) to decide a vehicle region. Afterwards, the present invention assumes that pixels inside the region belong to the real object body, so pixels detected as ground-illumination within that area are compensated back to the object mask.
- the present invention uses the headlight information to predict the possible shadow region; afterwards it employs a concept similar to ground-illumination detection to classify shadow pixels. Finally, shadow pixels are removed from the object mask to acquire a more accurate object mask.
- the present invention uses a counting line and a prediction region to implement the counting. Every vehicle passing through the counting line is counted.
- the present invention employs change detection to obtain the difference between inter-frames or between the current and background frames. After that, a statistical method is used to estimate the standard deviation and mean for the subsequent pixel classification. The background region, uncovered background region, still region and moving region are then detected, and finally the initial object mask is formed by combining them.
- Human vision can differentiate moving objects from background in dark environment by the difference of color and brightness. Transformation is usually utilized to separate the intensity component from color components.
- the HSI model is the most appropriate to describe the human vision to color information.
- the hue (H) component naturally shows the property of color.
- the saturation (S) component measures the level of adding white light to pure color.
- the intensity (I) component shows the degree of brightness.
- the present invention uses Eq. (1) to transform RGB components into HSI components.
- the present invention uses the intensity information for moving object detection.
- the hue and saturation components are used for ground-illumination detection and shadow detection. Because the present invention uses the intensity frame and the change detection method to obtain the motion information, if a background model exists, the accuracy of background-foreground separation will be raised.
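Equation (1) itself is not reproduced in this text, but the transform described here is the RGB-to-HSI conversion. A minimal sketch, assuming the standard textbook formulation (the function name is illustrative):

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Standard RGB-to-HSI conversion (a sketch; the patent's Eq. (1)
    is not reproduced in the text). rgb: HxWx3 floats in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-8
    intensity = (r + g + b) / 3.0                       # I component
    saturation = 1.0 - np.minimum(np.minimum(r, g), b) / (intensity + eps)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    hue = np.where(b <= g, theta, 360.0 - theta)        # H in degrees
    return np.stack([hue, saturation, intensity], axis=-1)
```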
- the present invention exploits the first 100 frames of the video sequence and records the values of intensity for each pixel with the same coordinate. The mean value of each point is estimated to form the initial background of intensity, as shown in equation (2).
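A minimal sketch of this initialization; Eq. (2) is not reproduced in the text, so the per-pixel mean over the first 100 intensity frames is assumed:

```python
import numpy as np

def estimate_initial_background(intensity_frames, n=100):
    """Per-pixel mean of the first n intensity frames, used as the
    initial background of intensity (a sketch of Eq. (2))."""
    stack = np.stack(list(intensity_frames)[:n]).astype(np.float64)
    return stack.mean(axis=0)
```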
- Gaussian smoothing filter in two dimensions is given by:
- g(i, j) = c·e^(−(i² + j²)/(2σ²)) (3)
- c is a constant which decides the amplitude of the Gaussian filter, and σ controls the degree of image smoothing.
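A sketch of Eq. (3) as a discrete, normalized kernel; the kernel radius and σ below are illustrative choices, not values given in the patent:

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(radius, sigma, c=1.0):
    """Discrete 2-D Gaussian of Eq. (3), g(i,j) = c*exp(-(i^2+j^2)/(2*sigma^2)),
    normalized so the weights sum to one."""
    i, j = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    g = c * np.exp(-(i ** 2 + j ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

def gaussian_smooth(frame, radius=2, sigma=1.0):
    """Smooth an intensity frame with the kernel above."""
    return convolve(np.asarray(frame, dtype=np.float64),
                    gaussian_kernel(radius, sigma))
```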
- the difference frame which is the difference between two successive frames is often utilized in change detection based segmentation methods.
- a difference frame includes two parts: foreground and background. The values of pixels in the foreground region are higher than those in the background region. The foreground area corresponds to the moving region, while the background area corresponds to the non-moving region. Sometimes the difference values in the background are high due to environmental effects, e.g., changes of illumination or noise; this can cause background pixels to be mistakenly classified as foreground in the change detection. Moreover, the background subtraction module is performed by taking the difference between the current frame and the background frame; the result is very similar to the difference frame.
- the goal of change detection is to separate the difference frame into the changed and unchanged regions by a threshold obtained from background estimation.
- the present invention employs the histogram-based change detection constructed from the difference frame.
- the block diagram of histogram-based change detection is shown in FIG. 3 .
- the basic idea is to analyze the gray-level or color distributions for exploiting the characteristics of the difference frame. Moreover the peak of histogram gives information about the background region.
- the present invention chooses the gray-level p′ which has the maximum number His(p′) of pixels, and then uses these two values to estimate the background model.
- random noise occurs between inter-frames; thus the background part of the difference frame can be regarded as Gaussian-distributed.
- the probability density function of Gaussian for the background region can be modeled as:
- μ_b and σ_b are the mean and standard deviation of the background region, respectively.
- μ_i and std_i are calculated for each point with gray-level p′ within a window of size N×N.
- the values of the global mean μ_b and standard deviation σ_b are estimated by:
- the estimated values can be used to classify each pixel of the difference frame into the unchanged region (denoted by gray level 0) and the changed region (denoted by gray level 255), as described by:
- DF(x,y) is the pixel value at coordinate (x,y) of the difference frame
- k is a constant that depends on the video content. In the present invention, this constant is set between 10 and 20 for dark environments.
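The classification rule itself (Eq. (7)) is not reproduced in the text; a sketch assuming the usual test, in which a pixel is marked changed when its difference value deviates from the background mean by more than k standard deviations:

```python
import numpy as np

def change_detection_mask(diff_frame, mu_b, sigma_b, k=15):
    """Label pixels of the difference frame as changed (255) or
    unchanged (0). mu_b and sigma_b come from the histogram-based
    background estimation; k is 10-20 for dark scenes per the text."""
    deviation = np.abs(np.asarray(diff_frame, dtype=np.float64) - mu_b)
    return np.where(deviation > k * sigma_b, 255, 0).astype(np.uint8)
```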
- in our method, object region detection is used to locate the object region.
- Table 1 lists the corresponding values of ON/OFF (changed/unchanged) for each pixel in the frame difference mask and the background subtraction mask for four types of regions. The corresponding regions indicated in the table are illustrated in FIG. 2 in which an object moves from the left side to the right side.
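A sketch of this region labeling and of the initial object mask built from it; names are illustrative, and the ON/OFF combinations follow Table 1:

```python
import numpy as np

BACKGROUND, UNCOVERED, STILL, MOVING = 0, 1, 2, 3  # region types of Table 1

def object_region_detection(bs_mask, fd_mask):
    """Combine the background subtraction mask (bs) and the frame
    difference mask (fd) into the four region types of Table 1."""
    bs, fd = bs_mask.astype(bool), fd_mask.astype(bool)
    regions = np.full(bs.shape, BACKGROUND, dtype=np.uint8)
    regions[~bs & fd] = UNCOVERED   # uncovered background region
    regions[bs & ~fd] = STILL       # still region
    regions[bs & fd] = MOVING       # moving region
    return regions

def initial_object_mask(regions):
    """OM_initial: union of the still and moving regions, as stated in the text."""
    return np.isin(regions, (STILL, MOVING))
```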
- FIG. 4 shows the block diagram of the background updating where Gau_c and Gau_p denote the current and previous frames after Gaussian smoothing, respectively.
- a counter count(x,y) counts the number of successive unchanged inter-frame detections. For the same coordinate (x,y), if the gray-level difference between Gau_c and Gau_p is smaller than the threshold Th_diff, the pixel is classified as unchanged and count(x,y) is incremented by one. Otherwise, if the difference is larger than Th_diff, count(x,y) is reset to zero. When count(x,y) reaches the constant t_count, the value at coordinate (x,y) of Gau_c is copied to the same position of Gau_back and count(x,y) is reset to zero, as shown in Eq. (8), where Gau_c(x,y), Gau_p(x,y) and Gau_back(x,y) are the values of Gau_c, Gau_p and Gau_back at coordinate (x,y).
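A vectorized sketch of this counter-based update; the values of Th_diff and t_count below are illustrative, not ones given in the text:

```python
import numpy as np

def update_background(gau_c, gau_p, gau_back, count, th_diff=3, t_count=30):
    """Counter-based background update following FIG. 4 / Eq. (8):
    stable pixels accumulate a count; once the count reaches t_count,
    the current value is copied into the background and the count resets."""
    stable = np.abs(gau_c - gau_p) < th_diff
    count = np.where(stable, count + 1, 0)        # reset where changed
    ready = count >= t_count
    gau_back = np.where(ready, gau_c, gau_back)   # update stable pixels
    count = np.where(ready, 0, count)             # reset after updating
    return gau_back, count
```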
- the present invention obtains the initial object mask OM initial by combining the moving region and still region. Not only the vehicles but also the ground-illumination and shadow are segmented into the initial object mask. This error region of object detection will greatly reduce the accuracy of segmentation and hence should be removed.
- streetlamps are set along the road and headlights of vehicles are turned on by drivers.
- the color of streetlamp is either yellow or white and so is the vehicle headlight.
- Yellow streetlamps make the ground look yellow, while white streetlamps make it look bluish (if there are trees near the streetlamps, it may look greenish due to reflection from leaves).
- ground-illumination looks yellow under a yellow headlight, and bluish (or white, if too bright) under a white headlight.
- the present invention can roughly divide the background into yellow and white streetlamp situations by estimating the average values of R, G and B components of the background region in the initial object mask, as shown in Eq. (9).
- Ground-illumination belongs to foreground area and has two color situations similar to streetlamps.
- Table 2 displays four cases of ground-illumination in different backgrounds. The effect of headlight is more apparent than streetlamp, therefore the color of illumination appears to be like headlight.
- the present invention defines three ratios R ratio , G ratio and B ratio in Eq. (11). These values represent the level of variation in each color channel.
- R_ratio = R_current / R_back
- G_ratio = G_current / G_back
- B_ratio = B_current / B_back, where R_current, G_current and B_current are the values of the current frame, and R_back, G_back and B_back are the values of the background frame.
- Equation (12) gives the condition for ground-illumination: if (I_current > I_back and R_current > G_current > B_current and B_ratio > R_ratio and B_ratio > G_ratio) → Ground-illumination (12), where I_current and I_back are the intensity values of the current and background frames.
- for a yellow background, the present invention deals only with the yellow-streetlamp/yellow-headlight and yellow-streetlamp/white-headlight situations. Similarly, for a blue background it handles only the white-streetlamp/yellow-headlight and white-streetlamp/white-headlight situations. After detection, the pixels belonging to ground-illumination are removed from the initial object mask and the object mask OM_g is then obtained.
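A sketch of the yellow-background case (Eqs. (11)-(12)); the remaining cases of Eqs. (13)-(15) follow the same pattern with different channel orderings:

```python
import numpy as np

def ground_illumination_yellow_bg(cur_rgb, back_rgb, cur_i, back_i):
    """Eq. (12): ground-illumination under a yellow streetlamp background.
    cur_rgb/back_rgb are HxWx3 arrays; cur_i/back_i are intensity maps."""
    eps = 1e-8
    r, g, b = cur_rgb[..., 0], cur_rgb[..., 1], cur_rgb[..., 2]
    rb, gb, bb = back_rgb[..., 0], back_rgb[..., 1], back_rgb[..., 2]
    r_ratio, g_ratio, b_ratio = r / (rb + eps), g / (gb + eps), b / (bb + eps)  # Eq. (11)
    return ((cur_i > back_i) & (r > g) & (g > b)
            & (b_ratio > r_ratio) & (b_ratio > g_ratio))
```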
- the present invention proposes a method to reclassify the object pixels to the object mask. The details are described in following subsections.
- the present invention proposes a method of headlight detection and then utilizes the information to do the classification of car and bike.
- the initial object mask is used to obtain the maximum and minimum values of gray level Max gray and Min gray in the object region.
- the intensity value of white headlight approaches 255.
- for a yellow headlight, the present invention requires an extremely large value of the R component.
- the present invention gives a tolerance range of two times G_r for the intensity.
- for a white headlight, the present invention requires an extremely large value of the B component, with an intensity range of G_r.
- Eq. (17) gives the conditions for possible headlight detection: if (R > (Max_gray − G_r) and (Max_gray − 2·G_r) ≤ I ≤ Max_gray) → Yellow headlight; if (B > (Max_gray − G_r) and (Max_gray − G_r) ≤ I ≤ Max_gray) → White headlight.
- I means the intensity value of the object region.
- the present invention can acquire the approximate headlight information.
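A sketch of Eqs. (16)-(17); the constant c that controls G_r is an assumed value here:

```python
import numpy as np

def possible_headlight_mask(obj_r, obj_b, obj_i, max_gray, min_gray, c=8):
    """Possible-headlight detection per Eqs. (16)-(17), applied to the
    R, B and intensity values of the object region."""
    gr = (max_gray - min_gray) / c                                   # Eq. (16)
    yellow = (obj_r > max_gray - gr) & (obj_i >= max_gray - 2 * gr) & (obj_i <= max_gray)
    white = (obj_b > max_gray - gr) & (obj_i >= max_gray - gr) & (obj_i <= max_gray)
    return yellow | white                                            # Eq. (17)
```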
- the ground-illumination may lead to false detection because the headlight is too bright.
- although the present invention can obtain the headlight information, it still has to classify the headlights into cars, bikes and errors.
- Each car has a pair of headlights while each bike has only one. Moreover, the width/height ratio of a car is different from that of a bike. In our method, the present invention pays more attention to cars than to bikes because the former is the major part of traffic flow.
- for each mass of possible headlight pixels in the headlight mask, the present invention uses its center pixel as the representative. As shown in FIG. 5, for every two headlight points the present invention obtains the distances x_diff and y_diff in the x and y coordinate directions, and the width/height ratio is then obtained by ratio_w/h = x_diff / y_diff (18).
- c is a positive integer; since a pair of car headlights appears nearly horizontal, the ratio should exceed c. In our method, c is set to 5.
- the pixel counts of the two headlight masses should be similar, and the pixels on the line segment between the two points should all belong to the object region.
- the present invention can thus obtain the condition for car headlights: if (ratio_w/h > 5 and num_l,r > th_l,r and 0.5 < ratio_num < 2) → Headlight pair of car (20)
- num l,r denotes the number of pixels, belonging to the initial object mask, on the line segment lr of the two headlight points l and r.
- th l,r is the threshold number and ratio num is the ratio of the pixel numbers of the two headlight masses.
- th_l,r is set to 0.8·|lr|, where |lr| is the length of the line segment lr.
- the condition num l,r >th l,r is used to avoid mistakenly classifying two bike headlights as a pair of car headlights.
- FIG. 5(b) shows the different types of headlight pairs. On the line segment from the left to the right headlight point, the pixels are almost all in the object region for a car, but mostly in the background for two bikes.
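A sketch of the pair test of Eqs. (18)-(20), with the segment between the two headlight centers sampled pixel by pixel:

```python
import numpy as np

def is_car_headlight_pair(l, r, om_initial, size_l, size_r, c=5):
    """Decide whether headlight centers l=(x, y) and r=(x, y) form a car
    pair. om_initial is the boolean initial object mask indexed [y, x];
    size_l/size_r are the pixel counts of the two headlight masses."""
    x_diff, y_diff = abs(l[0] - r[0]), abs(l[1] - r[1])
    if x_diff / max(y_diff, 1) <= c:          # Eqs. (18)-(19): near-horizontal pair
        return False
    n = max(x_diff, y_diff) + 1               # sample the segment between l and r
    xs = np.linspace(l[0], r[0], n).round().astype(int)
    ys = np.linspace(l[1], r[1], n).round().astype(int)
    num_lr = int(om_initial[ys, xs].sum())
    th_lr = 0.8 * n                           # 0.8 * |lr| per the text
    ratio_num = size_l / max(size_r, 1)
    return num_lr > th_lr and 0.5 < ratio_num < 2   # Eq. (20)
```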
- Another problem of car headlight classification is too-bright illumination.
- Two masses of ground-illumination may also be classified as a car headlight pair because the conditions are met. Normally, there are another two headlight masses behind the ground-illuminations, and the present invention utilizes this property to determine whether the headlight pair is too bright.
- the present invention takes two points behind the headlight points at a distance of x_diff and uses them as the center points to determine two regions as shown in FIG. 6 .
- further classification given in Eq. (21) is required.
- the concept is that, for ground-illumination masses, the pixels of the ground-illumination mask on AB, AC and BD should occupy the greater part of each line segment.
- too-bright illumination masses are removed from the car headlight mask.
- FIG. 7 shows the structure of a bike.
- the present invention defines four line segments ab , ac , ad and ae . If the conditions in Eq. (22) are met, the shape of object should be thin and the point is classified as a headlight of a bike.
- condition 4: the pixels belonging to the initial object mask on line segment ad should number more than a threshold derived from the bike width w.
- the present invention can utilize the headlight information to decide the compensation region for cars and bikes.
- the present invention uses the headlight points and the distance x_diff between them to determine an approximate area for compensation.
- FIG. 8( a ) shows compensation region of car
- FIG. 8( b ) shows compensation region of bike.
- the present invention just uses the width of headlight to decide the region.
- pixels belonging to the initial object mask in these regions are added to OM g and a new object mask OM com will be generated.
- the present invention employs the headlight information previously obtained to decide the region for shadow region prediction as shown in FIG. 9 .
- the present invention utilizes the concept of color variation introduced in the previous section to detect shadow pixels in the prediction region. Shadow reduces the intensity value without changing the color of the ground or the level order of the R, G and B components. Using these conditions, shadow is detected in the prediction region by Eq. (23).
- pixels belonging to it are removed from the initial object mask and the object mask is obtained.
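A sketch of the first condition of Eq. (23) (yellow background); order_same is a boolean map marking pixels whose R/G/B level order matches the background's:

```python
import numpy as np

def shadow_mask_yellow_bg(cur_rgb, back_rgb, cur_i, back_i, order_same):
    """Shadow pixels in the predicted region: darker than the background
    with the same color order, per the first condition of Eq. (23)."""
    eps = 1e-8
    r, g, b = cur_rgb[..., 0], cur_rgb[..., 1], cur_rgb[..., 2]
    rb, gb, bb = back_rgb[..., 0], back_rgb[..., 1], back_rgb[..., 2]
    r_ratio, g_ratio, b_ratio = r / (rb + eps), g / (gb + eps), b / (bb + eps)
    return ((cur_i < back_i) & (r > g) & (g > b)
            & (b_ratio > r_ratio) & (b_ratio > g_ratio) & order_same)
```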
- the present invention proposes a method employing the vehicle headlight information to count the vehicles.
- a car headlight pair is represented by its midpoint and a bike is represented by its headlight point.
- the present invention defines a region and draws a counting line with y-coordinate y in it as shown in FIG. 10( a ).
- a 10×10 prediction region below the point is defined as shown in FIG. 10(b). If there is a vehicle point p_next in the prediction region in the next frame, and the y-coordinate of p_next equals y while the y-coordinate of p_current is smaller than y, the vehicle counter is incremented by one.
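A sketch of the crossing test; the 10×10 prediction region is approximated here by a ±5-pixel box below the current point, which is an assumption about its exact placement:

```python
def vehicle_crossed(p_current, p_next, y_line, half=5):
    """True when a vehicle point moves into the prediction region and
    lands on the counting line y_line (per FIG. 10); points are (x, y)."""
    in_prediction_region = (abs(p_next[0] - p_current[0]) <= half
                            and 0 <= p_next[1] - p_current[1] <= 2 * half)
    return in_prediction_region and p_current[1] < y_line and p_next[1] == y_line

# usage: counter += sum(vehicle_crossed(p, q, y_line) for (p, q) in matched_points)
```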
- the closing and opening operations of morphological processing are employed to smooth the boundary of the object with a 3×3 structuring element.
- the object mask is refined as the final object mask OM final with more complete boundary.
- the proposed video object segmentation method was evaluated with three video sequences: Dong-men bridge 03, Dong-men bridge 06, and Chang-rong bridge 02, and their formats are described in Table 3.
- the software programs are written in C language, compiled and executed on Microsoft Visual C++ 6.0 platform without code optimization.
- Initial object segmentation gives the result of change detection without ground-illumination and shadow exclusion; hence, ground-illumination and shadow still remain in the initial object region.
- the present invention takes only the even-numbered frames as inter-frames to speed up execution. The final object segmentation is obtained by performing ground-illumination and shadow exclusion on the initial object segmentation.
- In the evaluation of vehicle counting, the present invention uses three other, longer sequences: Dong-men bridge 01, Dong-men bridge 04 and Dong-men bridge 08. Table 4 lists the formats of these test sequences, and Table 5 describes the results of vehicle counting. An error positive means that the counter is incremented by one without any real vehicle passing through the counting line, and an error negative indicates that the counter missed one vehicle that passed through the counting line.
- Accuracy_seg = (1 − Σ_(x,y)[OM_seg(x,y) ⊗ OM_ref(x,y)] / Σ_(x,y)[OM_seg(x,y) + OM_ref(x,y)]) × 100% (24)
- OM ref (x,y) is the ideal alpha map
- OM seg (x,y) is the object mask obtained from the proposed method
- ⊗ is the exclusive-OR (XOR) operator
- + is the OR operator
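A sketch of Eq. (24) with the operators just defined (XOR over OR, expressed as a percentage):

```python
import numpy as np

def segmentation_accuracy(om_seg, om_ref):
    """Spatial accuracy of Eq. (24): 1 - XOR/OR between the segmented
    object mask and the reference alpha map, in percent."""
    seg, ref = om_seg.astype(bool), om_ref.astype(bool)
    xor = np.logical_xor(seg, ref).sum()
    union = np.logical_or(seg, ref).sum()
    return (1.0 - xor / max(union, 1)) * 100.0
```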
- the present invention also calculates the error reduction rate of segmentation, as shown in Eq. (25).
- FIGS. 11 , 12 , and 13 show the error reduction rates for Dong-men bridge 03, Dong-men bridge 06 and Chang-rong bridge 02 sequences, respectively.
- the ground-illumination is roughly excluded.
- the accuracy of segmentation is about 46% and the average error reduction rate is 47.22%.
- the rate is low between frames #400 and #480 because vehicles there move in the horizontal direction, so the object compensation fails and many object pixels are excluded in the ground-illumination exclusion step.
- the accuracy of initial object mask is roughly 30% but for the final object mask it is about 56%.
- the accuracy of the initial object mask is close to that of the final object mask, and the error reduction rate is very low.
- the average error reduction rate is 63.36%.
- num real denotes the number of real vehicles passing through the counting line
- num error is the sum of error positives and error negatives.
- Normally, vehicles are correctly counted.
- In Dong-men bridge 01, one error is due to the merging of two bikes; the other is due to an asymmetrically bright headlight pair, which results in misjudgment of a vehicle's headlight pair.
- In Dong-men bridge 08, a headlight with low brightness leads to failure in headlight detection, so the vehicle is not counted.
- the present invention proposes an effective video segmentation method for dark or nighttime environments. It is based on change detection and background updating. Because no complex operators are adopted in our method and simple change detection is utilized to obtain the motion information, the proposed method is very efficient.
- the concept of color variation is used. It detects most of the illumination pixels and decreases the number of erroneous object pixels.
- the method for object region compensation can roughly decide the real object area and compensate pixels belonging to the vehicle body to the object mask.
- the headlight information is useful not only for object region compensation but also for shadow region prediction and vehicle counting. Shadow regions are roughly reduced using a concept similar to that applied to ground-illumination, and post-processing then refines the object boundary. Finally, the vehicle counting method yields an approximate measure of traffic flow.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
Description
d_t(x,y) = |f_t(x,y) − f_(t−1)(x,y)| (4)
where w_i(j) is the gray value of the j-th pixel in the i-th window.
TABLE 1: Object region detection

| Region Type | Background Subtraction Mask | Frame Difference Mask |
| --- | --- | --- |
| Background region | OFF | OFF |
| Uncovered background region | OFF | ON |
| Still region | ON | OFF |
| Moving region | ON | ON |
TABLE 2: Ground-illumination color under different streetlamp colors.

| Headlight | Yellow streetlamp | White streetlamp |
| --- | --- | --- |
| Yellow | Yellow | Yellow |
| White | Blue | Blue |
R_ratio = R_current / R_back
G_ratio = G_current / G_back
B_ratio = B_current / B_back (11)
where R_current, G_current, B_current are the values of the current frame and R_back, G_back, B_back are the values of the background frame.
if (I_current > I_back and R_current > G_current > B_current and B_ratio > R_ratio and B_ratio > G_ratio) → Ground-illumination (12)
where I_current and I_back are the intensity values of the current and background frames.
if (I_current > I_back and B_current > R_current and B_current > G_current and B_ratio > R_ratio and B_ratio > G_ratio) → Ground-illumination (13)
if (I_current > I_back and R_current > G_current > B_current and R_ratio > G_ratio and R_ratio > B_ratio) → Ground-illumination (14)
if (I_current > I_back and B_current > R_current and B_current > G_current and R_ratio > G_ratio and R_ratio > B_ratio) → Ground-illumination (15)
G_r = (Max_gray − Min_gray) / c (16)
where c is a constant controlling the value of G_r.
if (R > (Max_gray − G_r) and (Max_gray − 2·G_r) ≤ I ≤ Max_gray) → Yellow headlight
if (B > (Max_gray − G_r) and (Max_gray − G_r) ≤ I ≤ Max_gray) → White headlight (17)
where I is the intensity value of the object region.
ratio_w/h > c, where c is a constant (19)
if (ratio_w/h > 5 and num_l,r > th_l,r and 0.5 < ratio_num < 2) → Headlight pair of car (20)
d_3 is w, while d_1, d_2 and d_4 are likewise derived from the headlight distance x_diff. These constants can be adjusted to adapt to environments with different distances from the camera to the ground.
if (I_current < I_back and R_current > G_current > B_current and B_ratio > R_ratio and B_ratio > G_ratio and order_current = order_back) → shadow pixel
if (I_current < I_back and B_current > G_current and B_current > R_current and R_ratio > G_ratio and R_ratio > B_ratio and order_current = order_back) → shadow pixel (23)
TABLE 3: The format of test sequences

| Video Sequence | Resolution | Format | Total Frames |
| --- | --- | --- | --- |
| Dong-men bridge 03 | 320 × 240 | AVI | 691 |
| Dong-men bridge 06 | 320 × 240 | AVI | 692 |
| Chang-rong bridge 02 | 320 × 240 | AVI | 492 |
TABLE 4: Test sequences for vehicle counting.

| Video Sequence | Resolution | Format | Total Frames |
| --- | --- | --- | --- |
| Dong-men bridge 01 | 320 × 240 | AVI | 1600 |
| Dong-men bridge 04 | 320 × 240 | AVI | 1199 |
| Dong-men bridge 08 | 320 × 240 | AVI | 1002 |
TABLE 5: Results of vehicle counting.

| Video Sequence | Real Vehicles | Counted Vehicles | Error Positives | Error Negatives |
| --- | --- | --- | --- | --- |
| Dong-men bridge 01 | 11 | 9 | 0 | 2 |
| Dong-men bridge 04 | 8 | 8 | 0 | 0 |
| Dong-men bridge 08 | 6 | 5 | 0 | 1 |
TABLE 6: Average accuracy and error reduction rate.

| Sequence | Accuracy of OM_initial | Accuracy of OM_final | Error reduction rate |
| --- | --- | --- | --- |
| Dong-men bridge 03 | 33.44% | 46.06% | 47.22% |
| Dong-men bridge 06 | 30.78% | 56.39% | 63.36% |
| Chang-rong bridge 02 | 18.16% | 47.99% | 72.87% |
TABLE 7: Accuracy of vehicle counting.

| Sequence | num_real | num_error | Accuracy_count (%) |
| --- | --- | --- | --- |
| Dong-men bridge 01 | 11 | 2 | 81.8% |
| Dong-men bridge 04 | 8 | 0 | 100% |
| Dong-men bridge 08 | 6 | 1 | 83.3% |
| Average accuracy | | | 88.4% |
Claims (5)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW97123332A | 2008-06-23 | ||
TW097123332 | 2008-06-23 | ||
TW097123332A TW201002073A (en) | 2008-06-23 | 2008-06-23 | Method of vehicle segmentation and counting for nighttime video frames |
Publications (2)
Publication Number | Publication Date |
---|---|
US20090316957A1 (en) | 2009-12-24 |
US8019157B2 (en) | 2011-09-13 |
Family
ID=41431344
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/248,054 Active 2030-04-30 US8019157B2 (en) | 2008-06-23 | 2008-10-09 | Method of vehicle segmentation and counting for nighttime video frames |
Country Status (2)
Country | Link |
---|---|
US (1) | US8019157B2 (en) |
TW (1) | TW201002073A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090245571A1 (en) * | 2008-03-31 | 2009-10-01 | National Taiwan University | Digital video target moving object segmentation method and system |
US20160364618A1 (en) * | 2015-06-09 | 2016-12-15 | National Chung Shan Institute Of Science And Technology | Nocturnal vehicle counting method based on mixed particle filter |
US11715305B1 (en) | 2022-11-30 | 2023-08-01 | Amitha Nandini Mandava | Traffic detection system using machine vision |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8331623B2 (en) * | 2008-12-23 | 2012-12-11 | National Chiao Tung University | Method for tracking and processing image |
CN101465955B (en) * | 2009-01-05 | 2013-08-21 | 北京中星微电子有限公司 | Method and apparatus for updating background |
CN101827204B (en) * | 2010-04-19 | 2013-07-17 | 成都索贝数码科技股份有限公司 | Method and system for detecting moving object |
EP2413265B1 (en) * | 2010-07-29 | 2017-10-18 | Tata Consultancy Services Ltd. | A system and method for classification of moving object during video surveillance |
WO2013032441A1 (en) * | 2011-08-30 | 2013-03-07 | Hewlett-Packard Development Company, L.P. | Inserting an object into an image |
CN102679957B (en) * | 2012-04-26 | 2013-12-18 | 燕山大学 | Background information and color feature combined fish body detection method |
KR101735565B1 (en) * | 2012-06-25 | 2017-05-15 | 한화테크윈 주식회사 | Method and system for motion detection using elimination of shadow by heat |
EP2821967A1 (en) * | 2013-07-03 | 2015-01-07 | Kapsch TrafficCom AB | Shadow detection in a multiple colour channel image |
CN104732235B (en) * | 2015-03-19 | 2017-10-31 | 杭州电子科技大学 | A kind of vehicle checking method for eliminating the reflective interference of road at night time |
CN104767913B (en) * | 2015-04-16 | 2018-04-27 | 北京思朗科技有限责任公司 | A kind of adaptive video denoising system of contrast |
CN105279754B (en) * | 2015-09-10 | 2018-06-22 | 华南理工大学 | A kind of component dividing method suitable for bicycle video detection |
CN112597806A (en) * | 2020-11-30 | 2021-04-02 | 北京影谱科技股份有限公司 | Vehicle counting method and device based on sample background subtraction and shadow detection |
US11798288B2 (en) | 2021-03-16 | 2023-10-24 | Toyota Research Institute, Inc. | System and method for generating a training set for improving monocular object detection |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6636257B1 (en) * | 1998-08-11 | 2003-10-21 | Honda Giken Kogyo Kabushiki Kaisha | Mobile body recognizing apparatus and motor vehicle monitoring apparatus |
US7483549B2 (en) * | 2004-11-30 | 2009-01-27 | Honda Motor Co., Ltd. | Vehicle surroundings monitoring apparatus |
US7747039B2 (en) * | 2004-11-30 | 2010-06-29 | Nissan Motor Co., Ltd. | Apparatus and method for automatically detecting objects |
-
2008
- 2008-06-23 TW TW097123332A patent/TW201002073A/en unknown
- 2008-10-09 US US12/248,054 patent/US8019157B2/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6636257B1 (en) * | 1998-08-11 | 2003-10-21 | Honda Giken Kogyo Kabushiki Kaisha | Mobile body recognizing apparatus and motor vehicle monitoring apparatus |
US7483549B2 (en) * | 2004-11-30 | 2009-01-27 | Honda Motor Co., Ltd. | Vehicle surroundings monitoring apparatus |
US7747039B2 (en) * | 2004-11-30 | 2010-06-29 | Nissan Motor Co., Ltd. | Apparatus and method for automatically detecting objects |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090245571A1 (en) * | 2008-03-31 | 2009-10-01 | National Taiwan University | Digital video target moving object segmentation method and system |
US8238605B2 (en) * | 2008-03-31 | 2012-08-07 | National Taiwan University | Digital video target moving object segmentation method and system |
US20160364618A1 (en) * | 2015-06-09 | 2016-12-15 | National Chung Shan Institute Of Science And Technology | Nocturnal vehicle counting method based on mixed particle filter |
US11715305B1 (en) | 2022-11-30 | 2023-08-01 | Amitha Nandini Mandava | Traffic detection system using machine vision |
Also Published As
Publication number | Publication date |
---|---|
TW201002073A (en) | 2010-01-01 |
US20090316957A1 (en) | 2009-12-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8019157B2 (en) | Method of vehicle segmentation and counting for nighttime video frames | |
O'Malley et al. | Rear-lamp vehicle detection and tracking in low-exposure color video for night conditions | |
Unzueta et al. | Adaptive multicue background subtraction for robust vehicle counting and classification | |
US8798314B2 (en) | Detection of vehicles in images of a night time scene | |
TWI409718B (en) | Method of locating license plate of moving vehicle | |
CN112036254B (en) | Moving vehicle foreground detection method based on video image | |
CN102165493B (en) | Detection of vehicles in an image | |
US8045761B2 (en) | Detection of environmental conditions in a sequence of images | |
US10878259B2 (en) | Vehicle detecting method, nighttime vehicle detecting method based on dynamic light intensity and system thereof | |
Engel et al. | A low-complexity vision-based system for real-time traffic monitoring | |
O'malley et al. | Vision-based detection and tracking of vehicles to the rear with perspective correction in low-light conditions | |
KR20070027768A (en) | Method for traffic sign detection | |
Chen | Nighttime vehicle light detection on a moving vehicle using image segmentation and analysis techniques | |
CN105740835B (en) | Front vehicles detection method based on in-vehicle camera under overnight sight | |
Huerta et al. | Exploiting multiple cues in motion segmentation based on background subtraction | |
CN103530640A (en) | Unlicensed vehicle detection method based on AdaBoost and SVM (support vector machine) | |
Buch et al. | Vehicle localisation and classification in urban CCTV streams | |
Chen et al. | Traffic congestion classification for nighttime surveillance videos | |
Li et al. | A low-cost and fast vehicle detection algorithm with a monocular camera for adaptive driving beam systems | |
CN109800693B (en) | Night vehicle detection method based on color channel mixing characteristics | |
CN106529533A (en) | Complex weather license plate positioning method based on multi-scale analysis and matched sequencing | |
JP2012023572A (en) | White balance coefficient calculating device and program | |
CN107066929B (en) | Hierarchical recognition method for parking events of expressway tunnel integrating multiple characteristics | |
Chen et al. | Vehicle detection and counting by using headlight information in the dark environment | |
US20160364618A1 (en) | Nocturnal vehicle counting method based on mixed particle filter |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: HUPER LABORATORIES CO., LTD., TAIWAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHEN, CHAO-HO;JUN-LIANG, CHEN;CHANG, CHOU-MING;REEL/FRAME:021659/0253 Effective date: 20080806 |
|
AS | Assignment |
Owner name: HUPER LABORATORIES CO., LTD., TAIWAN Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE SECOND AND THIRD INVENTOR'S NAME FROM "CHEN JUN-LIANG" TO "JUN-LIANG CHEN" AND FROM "CHOU-MING CHANG" TO "CHAO-MING CHANG" PREVIOUSLY RECORDED ON REEL 021659 FRAME 0253;ASSIGNORS:CHEN, JUN-LIANG;CHANG, CHAO-MING;REEL/FRAME:022143/0684 Effective date: 20090109 Owner name: HUPER LABORATORIES CO., LTD., TAIWAN Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE SECOND AND THIRD INVENTOR'S NAME FROM "CHEN JUN-LIANG" TO "JUN-LIANG CHEN" AND FROM "CHOU-MING CHANG" TO "CHAO-MING CHANG" PREVIOUSLY RECORDED ON REEL 021659 FRAME 0253. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT OF THE ASSIGNOR'S INTEREST;ASSIGNORS:CHEN, JUN-LIANG;CHANG, CHAO-MING;REEL/FRAME:022143/0684 Effective date: 20090109 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2552); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY Year of fee payment: 8 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2553); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY Year of fee payment: 12 |