CN109684996B - Real-time vehicle access identification method based on video - Google Patents

Real-time vehicle access identification method based on video

Info

Publication number
CN109684996B
CN109684996B (application CN201811576203.XA)
Authority
CN
China
Prior art keywords
vehicle
image
moving
area
background
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811576203.XA
Other languages
Chinese (zh)
Other versions
CN109684996A (en)
Inventor
孙光民
张子昊
王皓
翁羽
赵莹帝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201811576203.XA priority Critical patent/CN109684996B/en
Publication of CN109684996A publication Critical patent/CN109684996A/en
Application granted granted Critical
Publication of CN109684996B publication Critical patent/CN109684996B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/254 Analysis of motion involving subtraction of images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/269 Analysis of motion using gradient-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

A video-based real-time vehicle entry and exit identification method, relating to image processing. The method comprises: installing a camera to collect images; detecting the moving target by combining the frame difference method with a dense optical flow method; detecting the foreground with a background difference method; fusing the motion foregrounds and extracting a color image of the vehicle area; tracking corners of the moving target with the LK optical flow method; judging vehicle entry; and identifying the color of the moving vehicle, counting the number of vehicles and outputting a result image. The invention not only improves the completeness of the extracted vehicle area, but also segments the vehicle area whether or not the vehicle is moving: the vehicle area is still detected while the vehicle stays, and the target is not lost. The method has strong robustness.

Description

Real-time vehicle access identification method based on video
Technical Field
The invention relates to an image processing method, in particular to a vehicle entering and exiting identification method.
Background
With the economic development and continuous scientific and technological progress in China, the construction of smart cities is being vigorously promoted in major cities. The smart parking lot is one component of the smart city, and managing vehicle entry and exit in a community by information technology has become a development trend. In an efficient unattended parking lot system, vehicle entry behavior must be detected accurately and in real time.
Generally, vehicle entry and exit are judged by sensors based on geomagnetic or infrared induction, by image recognition based on license plate detection, or by wireless induction using an identity card. Some of these methods are difficult to install or can only be used in specific scenarios, and some are costly. Moreover, they cannot acquire complete vehicle information such as vehicle images and colors; if a vehicle image must be captured, a separate camera has to be installed for that purpose. Acquiring and storing more vehicle information preserves more evidence for subsequent matters such as vehicle payment and vehicle security, which is an essential link. With an image-recognition-based method, a single camera can both detect the vehicle entry behavior and acquire vehicle image information.
Besides the above vehicle entry detection modes, another category is intelligent video-based detection. Convolutional neural networks are commonly used to detect vehicles, but the network models are often very large, require high-end computing hardware, and struggle to meet real-time requirements. Alternatively, a traditional classifier can be trained on extracted vehicle features to distinguish vehicles from other objects, but this requires a large dataset of vehicles in the target scene and training of the classifier, which is a heavy workload. Detecting vehicles as moving targets is also applied in intelligent highway monitoring systems, but it usually requires the vehicle to keep moving, so its application scenarios are fixed.
Moving object detection methods have relatively low computational complexity and can more easily meet real-time requirements. They include the frame difference method, background modeling and optical flow. The frame difference method produces holes in the detected moving object and can mark only the edge positions where change is large. The background difference method is strongly affected by illumination: if illumination changes the scene, pixel-value changes on non-moving objects are marked as motion. In background modeling, sudden changes of individual pixels or slightly swaying objects cause the model to misjudge. In the actual scene in front of a barrier gate the environment is often complex, and a general moving object detection method can hardly achieve a good result.
Disclosure of Invention
The invention aims to overcome the defects of the above techniques and to detect the entry and exit of moving vehicles in the scene in front of a barrier gate.
In order to achieve this aim, the invention provides a video-based real-time vehicle entry and exit identification method, which comprises the following steps:
step 1, installing a camera to collect images
Step 2, moving object detection based on combination of frame difference method and dense optical flow method
Step 3, foreground detection based on background difference method
Step 4, fusing the motion foreground and extracting a color image of the vehicle region
Step 5, tracking the corner points of the moving target based on LK optical flow method
Step 6, judging the entry and exit of the moving vehicle
Step 7, identifying the color of the moving vehicle, counting the number of vehicles and outputting a result image.
The invention fuses multiple motion detection methods to detect vehicles, with the following beneficial effects:
1. A moving object detection algorithm based on the combination of the frame difference method and the dense optical flow method. The traditional dense optical flow algorithm can detect a moving target well and segment the moving area accurately, but it cannot eliminate the influence of the external environment such as illumination. The frame difference method is little affected by illumination changes, but the segmented motion areas contain holes and are not connected. The invention combines the two algorithms to detect the moving target; the combined method is robust to illumination, and the detected target area has no holes and forms a complete connected domain.
2. A foreground detection algorithm based on the background difference method. The traditional background difference method is strongly affected by illumination change and can only be applied in a scene where illumination varies little. The invention updates the background in real time to cope with illumination change, and is robust both to the large, slow illumination change from morning to evening and to the sudden illumination change when, for example, a light in the scene is suddenly turned on.
3. A vehicle detection algorithm combining moving object detection and scene foreground detection. The moving object detection algorithm based on the combination of the frame difference method and the dense optical flow method segments the moving area well, but in an actual scene a vehicle may stop, and that algorithm cannot segment the vehicle area while the vehicle is stationary. The background-updating foreground extraction algorithm can segment the vehicle foreground from the background while the vehicle stays, but if the vehicle's color is similar to the background, the segmented area may be incomplete. The invention combines the two algorithms, which not only improves the completeness of the vehicle region segmentation, but also segments the vehicle region whether or not the vehicle is moving. The vehicle area is still detected while the vehicle stays, and the target is not lost.
4. Moving object corner tracking based on the LK optical flow method. The advantage of recording a motion trajectory with this method is that what is recorded is truly a moving object. If the segmented vehicle area is wrong, i.e. it is actually an area changed by illumination, then since an illumination change is only a pixel-value change, the feature points do not translate, and the corner displacement recorded by the LK optical flow method is small, close to zero. For a genuinely moving vehicle, the displacement is large. Thus, with this method, vehicle detection is robust even under varying illumination.
Drawings
FIG. 1 is a flow chart of a video-based real-time vehicle in-and-out identification method according to an embodiment of the invention
FIG. 2 is a view showing the camera installation
FIG. 3 is an original image of a vehicle passing in front of the barrier gate
FIG. 4 is a gray-scale image generated by the frame difference method
FIG. 5 is a binary image generated by dense optical flow
FIG. 6 is the extracted background color image
FIG. 7 is a foreground binary image generated by background difference
FIG. 8 is a vehicle color image obtained by fusing multiple motion detection methods
FIG. 9 is a diagram of the corner tracking effect of the LK optical flow method
FIG. 10 is a diagram of the result output after a vehicle has entered
FIG. 11 is a diagram of the processing of another vehicle in this scene
FIG. 12 is a diagram of the processing of a vehicle staying in another scene
FIG. 13 is a diagram of the processing when the staying vehicle moves forward again
FIG. 14 is a diagram of the processing of a scene change caused by illumination
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative only and should not be construed as limiting the invention.
As shown in fig. 1, the video-based real-time vehicle entry and exit identification method of the present invention detects vehicles in front of a barrier gate. The specific steps are as follows:
step 1, installing a camera to collect images. The invention uses a fisheye wide-angle camera to collect images. The camera is installed on the side of the gateway box, as shown in fig. 2, and the shooting direction is perpendicular to the vehicle entering direction.
Step 2, moving target detection based on the combination of the frame difference method and the dense optical flow method.
Step 2.1, moving target detection by the frame difference method. The frame difference method subtracts the corresponding pixel values of two adjacent frames. Where the difference is small, the point is considered stationary; where it is large, the change is attributed to object motion. Let the difference image be $Y_k(i,j)$, let the pixels of the (k+1)-th and k-th frame images at point $(i,j)$ be $T_{k+1}(i,j)$ and $T_k(i,j)$ respectively, and let the result after thresholding be $I_k(i,j)$. The frame difference formula is:

$$Y_k(i,j) = \left| T_{k+1}(i,j) - T_k(i,j) \right| \tag{1}$$

$$I_k(i,j) = \begin{cases} 1, & Y_k(i,j) \ge \tau \\ 0, & Y_k(i,j) < \tau \end{cases} \tag{2}$$

where $\tau$ is the threshold, here 30. Fig. 3 shows an original image of the area in front of the barrier gate as a vehicle passes. Fig. 4 shows the gray-scale result generated by the frame difference processing.
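As an illustration only, a minimal Python/OpenCV sketch of this differencing step (the helper name is ours, not the patent's; the threshold of 30 follows the text):

```python
import cv2

def frame_difference(prev_gray, curr_gray, tau=30):
    """Binary motion mask from two consecutive gray frames (eqs. 1-2)."""
    diff = cv2.absdiff(curr_gray, prev_gray)                     # Y_k = |T_{k+1} - T_k|
    _, mask = cv2.threshold(diff, tau, 255, cv2.THRESH_BINARY)   # I_k
    return diff, mask
```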
Step 2.2, dense optical flow moving target detection. The gray image generated by the frame difference processing of step 2.1 is segmented using a dense optical flow method. The dense optical flow method estimates the optical flow vector of the object from two adjacent frames, computing an optical flow vector for every pixel. First, polynomial expansion approximates the neighborhood of each pixel with a quadratic polynomial; the displacement vector of the optical flow field is then estimated by analyzing the polynomial expansion coefficients of corresponding pixels in the two frames.
The result of this algorithm is shown in fig. 5. A vehicle passing in front of the camera is generally the largest moving object, so if there are multiple moving regions in a scene, only the one with the largest area is kept here.
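The polynomial-expansion approach described here corresponds to Farneback's dense optical flow algorithm, which OpenCV exposes as calcOpticalFlowFarneback. A sketch under that assumption follows; the flow-magnitude threshold used to binarize the flow field is our assumption, since the patent does not state one:

```python
import cv2
import numpy as np

def dense_flow_mask(prev_gray, curr_gray, mag_thresh=1.0):
    """Binary motion mask from Farneback dense optical flow, keeping only
    the moving region with the largest area."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    mask = (mag > mag_thresh).astype(np.uint8) * 255
    # keep only the largest connected moving region
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    largest = np.zeros_like(mask)
    if contours:
        c = max(contours, key=cv2.contourArea)
        cv2.drawContours(largest, [c], -1, 255, thickness=cv2.FILLED)
    return largest
```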
Step 3, foreground detection based on the background difference method.
Step 3.1, obtaining the motion area by background difference. A video frame is subtracted from the background image, and the differential gray image is then thresholded to obtain a binary image of the active motion area. Let the background difference image be $Y_k(i,j)$, let the pixel of the k-th frame image at point $(i,j)$ be $M_k(i,j)$, let the current background image be $B_k(i,j)$, and let the result after thresholding be $I_k(i,j)$. The background difference formula is:

$$Y_k(i,j) = \left| M_k(i,j) - B_k(i,j) \right| \tag{3}$$

$$I_k(i,j) = \begin{cases} 1, & Y_k(i,j) \ge \tau \\ 0, & Y_k(i,j) < \tau \end{cases} \tag{4}$$

where $\tau$ is the threshold, here 50; "1" marks a foreground pixel of the k-th frame image and "0" a background pixel. Fig. 6 shows the background image of the scene, and fig. 7 shows the foreground binary image obtained by the background difference method. Here again only the foreground region with the largest area is kept.
Step 3.2, background update. The average of the first 100 frames of the video is taken as the initial background. When there is no moving target in the video, the background image is updated in real time; when there is a moving target, the background is not updated. Two conditions must both hold for a frame to be judged free of motion: 1, the area of the motion foreground segmented by the background difference algorithm is less than 50 × 50; 2, tracking feature points between the previous and current frames of this area with the LK optical flow method, the moving distance of the feature points is less than 100.
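One possible reading of this update rule as code is sketched below; the running-average rate alpha is our assumption, since the patent does not specify how the real-time update is computed:

```python
import cv2

def update_background(background_f32, frame_gray, fg_area, lk_move_dist,
                      alpha=0.05):
    """Update the float32 background only when both 'no motion' conditions
    hold: (1) background-difference foreground area < 50*50 pixels, and
    (2) summed LK feature-point displacement < 100."""
    if fg_area < 50 * 50 and lk_move_dist < 100:
        cv2.accumulateWeighted(frame_gray, background_f32, alpha)
    return background_f32
```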
Step 4, fusing the motion foregrounds and extracting the vehicle-region color image. The motion-area binary images obtained in step 2 and step 3 are superimposed, and the result is AND-ed with the original image; the processing result is shown in fig. 8. A vehicle passing in front of the camera produces a large motion area, so when the area of the motion region is larger than a certain threshold it is considered a vehicle and the region is kept; if it is smaller than the threshold, it is rejected. The threshold used here is 0.15 times the area of the original image.
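A sketch of the fusion step, assuming the "superposition" is a pixel-wise OR of the two masks and the combination with the original image is a masked copy (both our reading of the text):

```python
import cv2

def fuse_and_extract(frame_bgr, mask_motion, mask_foreground):
    """OR the two binary masks, AND with the original frame, and apply
    the 0.15x-image-area acceptance test from the text."""
    fused = cv2.bitwise_or(mask_motion, mask_foreground)
    h, w = fused.shape
    if cv2.countNonZero(fused) < 0.15 * h * w:   # too small: not a vehicle
        return None
    return cv2.bitwise_and(frame_bgr, frame_bgr, mask=fused)
```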
Step 5, tracking corners of the moving target based on the LK optical flow method. The vehicle color image is first converted to a gray image, key points are detected with the Shi-Tomasi corner detection algorithm, and these points are then tracked iteratively with the Lucas-Kanade algorithm. The tracking trajectories are drawn on the image; the effect is shown in fig. 9.
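For illustration, a sketch of this corner-tracking step with OpenCV's Shi-Tomasi detector and pyramidal Lucas-Kanade tracker (parameter values are typical defaults, not taken from the patent):

```python
import cv2
import numpy as np

def track_corners(prev_gray, curr_gray, prev_pts=None):
    """Detect Shi-Tomasi corners, then track them with pyramidal
    Lucas-Kanade; returns matched (old, new) point pairs for trajectories."""
    if prev_pts is None:
        prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                           qualityLevel=0.3, minDistance=7)
    if prev_pts is None:                      # no corners found
        return np.empty((0, 2)), np.empty((0, 2))
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   prev_pts, None,
                                                   winSize=(15, 15),
                                                   maxLevel=2)
    good = status.ravel() == 1
    return prev_pts[good].reshape(-1, 2), next_pts[good].reshape(-1, 2)
```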
Step 6, judging vehicle entry and exit. Two conditions are used: 1. more than 10 consecutive frames of the video sequence must be detected as the vehicle region of step 4; otherwise the change is considered an environment change caused by illumination. 2. For each frame, the sum of the moving distances of all key points tracked by LK optical flow is recorded; if this sum over the entry sequence is greater than 100, a vehicle is considered to have passed, otherwise the region is a non-vehicle area. Vehicle motion is confirmed only when both conditions hold; the entry or exit direction is then judged from the movement direction of the key points tracked by the LK optical flow method. For example, fig. 3 shows the scene in front of a basement entrance gate: if the key points move to the left, the vehicle is judged to be entering; conversely, it is exiting.
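A sketch of the two-condition judgment; the data layout (a list of per-frame displacement vectors) and the leftward-entry sign convention follow the example scene, but are otherwise our assumptions:

```python
import numpy as np

def judge_entry(vehicle_frames, displacements):
    """Apply the two conditions of step 6 and return 'enter', 'exit' or
    None. displacements: per-frame (dx, dy) sums of the tracked points."""
    if vehicle_frames <= 10:
        return None                       # likely an illumination change
    disp = np.asarray(displacements, dtype=float)
    if np.abs(disp).sum() <= 100:
        return None                       # not enough motion: non-vehicle area
    # leftward net motion = entering in the example scene; flip the sign
    # test for a mirrored camera setup
    return 'enter' if disp[:, 0].sum() < 0 else 'exit'
```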
Step 7, identifying the color of the moving vehicle. The segmented vehicle image is first converted to the HSV color space; the number of pixels of the vehicle image falling in each color range of the quantization template is then counted, and the color range containing the most pixels is taken as the vehicle color. The HSV color statistics template is given in Table 1.
Table 1. HSV color statistics template
[Table 1 is published as an image in the original document; it lists the H, S and V value ranges for each quantized color.]
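Since Table 1 is published only as an image, the ranges below are illustrative placeholders, not the patent's actual template; the counting logic follows the text:

```python
import cv2
import numpy as np

# Illustrative HSV ranges only -- the patent's actual Table 1 is an image,
# so these bounds are our assumption, not the original quantization template.
HSV_TEMPLATE = {
    "white": ((0, 0, 200),   (180, 40, 255)),
    "black": ((0, 0, 0),     (180, 255, 50)),
    "red":   ((0, 80, 60),   (10, 255, 255)),
    "blue":  ((100, 80, 60), (130, 255, 255)),
}

def identify_color(vehicle_bgr):
    """Count pixels in each quantized HSV range; the range with the most
    pixels is reported as the vehicle color."""
    hsv = cv2.cvtColor(vehicle_bgr, cv2.COLOR_BGR2HSV)
    counts = {name: cv2.countNonZero(cv2.inRange(hsv, np.array(lo),
                                                 np.array(hi)))
              for name, (lo, hi) in HSV_TEMPLATE.items()}
    return max(counts, key=counts.get)
```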
After the vehicle enters, the number of vehicles is counted and a result image is output, as shown in fig. 10.
Fig. 11 shows the processing of another vehicle in this scene. The vehicle regions segmented in the dense optical flow binary image and in the background difference binary image are each incomplete, but the superimposed region is complete. In the background difference binary image, part of the gate is segmented together with the vehicle, but this does not affect the LK optical flow corner tracking.
As can be seen, the algorithm of the invention is robust in this scene and detects the vehicle entry well.
Fig. 12 shows the processing of a vehicle staying in front of the gate in another scene. As the figure shows, no motion area is detected in the dense optical flow binary image, but the background difference method can still segment the vehicle area; after superposition the vehicle is still detected, so it is not lost while it stops. While the vehicle is stopped, the corners tracked by LK optical flow do not change and there is no displacement.
As shown in fig. 13, when the staying vehicle moves again, the corners tracked by LK optical flow are displaced again; a single trajectory is drawn in the figure, and the result image is generated after the vehicle has entered.
From fig. 12 and fig. 13 it can be seen that the algorithm of the invention is also robust to a staying vehicle: it detects normally without losing the moving target.
Fig. 14 shows the processing of a scene change caused by illumination. The dense optical flow method detects no moving object, while the background difference detects a changed area, so after fusion an area changed by illumination is segmented. This area is not a moving vehicle area: the LK optical flow corner-tracking step detects no corners on a vehicle body and records no motion trajectory. Moreover, an illumination change is sudden, lasting only two or three frames, so the video sequence contains neither the more-than-10 consecutive frames of motion area required by the entry/exit judgment nor motion exceeding the distance threshold. Therefore, even if illumination changes an area, the final vehicle detection result is not affected.
From fig. 14 it can be seen that the algorithm of the invention is robust to illumination changes, which do not affect the detection of vehicle entry and exit. The algorithm achieves a good effect.

Claims (1)

1. A video-based real-time vehicle entering and exiting identification method, characterized by comprising the following specific implementation steps:
step 1, installing a camera to collect images
Step 2, detecting the moving object based on the combination of the frame difference method and the dense optical flow method, specifically as follows:
step 2.1 moving object detection by frame difference method
Subtracting the corresponding pixel values of the front and rear frames of images by using a frame difference method;
let the difference image be Yk(i, j), the pixels of the (k + 1) th frame and the k frame image at the (i, j) point are respectively Tk+1(i, j) and Tk(I, j), the result after the threshold processing is Ik(i, j), the frame difference formula is:
Yk(i,j)=|Tk+1(i,j)-Tk(i,j)| (1)
Figure FDA0002705271560000011
i in the above formula is a threshold, here 30;
step 2.2 dense optical flow moving object detection
segmenting the moving target from the gray image generated by the frame difference processing of step 2.1 using a dense optical flow method;
a vehicle passing in front of the camera is generally the largest moving object; if there are multiple moving areas in the scene, only the moving area with the largest area is kept;
step 3, foreground detection based on a background difference method is specifically as follows:
step 3.1, obtaining a motion area by background difference;
subtracting the background image from the video frame, and then performing threshold processing on the differential gray image to obtain a binary image of the moving area; let the background difference image be Yk(i, j), the pixel of the k frame image at the point (i, j) is Mk(i,j),The background image at this time is Bk(I, j), the result after the threshold processing is Ik(i, j), the formula of the background subtraction method is:
Yk(i,j)=|Mk(i,j)-Bk(i,j)| (1)
Figure FDA0002705271560000012
i in the above formula is a threshold, here 50; "1" represents the occurrence foreground region of the K frame image, and "0" represents the background region;
obtaining a foreground binary image by a background difference method; only one foreground region with the largest area is reserved;
step 3.2, updating the background;
the average value of the images above the first 100 frames of the video is used as an initial background, when no moving object exists in the video, the background image is updated in real time, and when the moving object exists, the background is not updated;
the judgment condition for not being a motion region is considered here to be two: 1, the area of the motion foreground divided by the background difference algorithm is less than 50 multiplied by 50; 2, tracking feature points of the front and rear frame images of the area by using an LK optical flow method, wherein the moving distance of the feature points is less than 100;
step 4, fusing the motion foreground and extracting a color image of the vehicle region;
overlapping the binary images of the motion area obtained in the step 2 and the step 3, performing an operation with the original image,
the vehicle passes through the front of the camera, the moving area is large, therefore, when the area of the moving area is larger than a certain threshold value, the vehicle is considered as the vehicle, the vehicle is reserved for the moving area, and if the area of the moving area is smaller than the threshold value, the vehicle is removed; the threshold used here is 0.15 times the original image area;
step 5, tracking the corner points of the moving target based on an LK optical flow method; converting a vehicle color image into a gray image, detecting key points in the image by using a Shi-Tomasi angular point detection algorithm, and then iteratively tracking the points by using a Lucas-Kanade algorithm; drawing the tracking tracks on a graph;
step 6, judging the moving vehicles to enter and exit;
there are two conditions for the judgment of vehicle entrance and exit: 1. continuously more than 10 frames of images in the video sequence are detected as the vehicle area in the step 4, otherwise, the environment change caused by illumination is considered; 2. recording the moving distance sum of all key points tracked by each frame of image and LK optical flow, and considering that the vehicle passes through if the distance sum is greater than 100 in the vehicle entering sequence, otherwise, the vehicle is a non-vehicle area; judging that the vehicle moves when the two conditions are met, and judging the entering and exiting directions of the vehicle according to the moving direction of the key points tracked by the LK optical flow method;
step 7, identifying the color of the moving vehicle;
for the divided vehicle images, firstly converting the color space into HSV, then respectively calculating the number of pixel points in the vehicle images in different color ranges according to a quantization template, and counting the color range with the largest number of the pixel points as the vehicle color; and after the vehicles enter, counting the number of the vehicles and outputting a result image.
CN201811576203.XA 2018-12-22 2018-12-22 Real-time vehicle access identification method based on video Active CN109684996B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811576203.XA CN109684996B (en) 2018-12-22 2018-12-22 Real-time vehicle access identification method based on video


Publications (2)

Publication Number Publication Date
CN109684996A CN109684996A (en) 2019-04-26
CN109684996B true CN109684996B (en) 2020-12-04

Family

ID=66188956

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811576203.XA Active CN109684996B (en) 2018-12-22 2018-12-22 Real-time vehicle access identification method based on video

Country Status (1)

Country Link
CN (1) CN109684996B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110189494A (en) * 2019-07-01 2019-08-30 深圳江行联加智能科技有限公司 A kind of substation's exception monitoring alarm system
CN111083363A (en) * 2019-12-16 2020-04-28 河南铭视科技股份有限公司 Video recorder management system
CN111523385B (en) * 2020-03-20 2022-11-04 北京航空航天大学合肥创新研究院 Stationary vehicle detection method and system based on frame difference method
CN111626179B (en) * 2020-05-24 2023-04-28 中国科学院心理研究所 Micro-expression detection method based on optical flow superposition
CN111781600B (en) * 2020-06-18 2023-05-30 重庆工程职业技术学院 Vehicle queuing length detection method suitable for signalized intersection scene
CN111914627A (en) * 2020-06-18 2020-11-10 广州杰赛科技股份有限公司 Vehicle identification and tracking method and device
CN112200101B (en) * 2020-10-15 2022-10-14 河南省交通规划设计研究院股份有限公司 Video monitoring and analyzing method for maritime business based on artificial intelligence
CN112597953B (en) * 2020-12-28 2024-04-09 深圳市捷顺科技实业股份有限公司 Method, device, equipment and medium for detecting passerby in passerby area in video
CN113066306B (en) * 2021-03-23 2022-07-08 超级视线科技有限公司 Management method and device for roadside parking
CN113705434A (en) * 2021-08-27 2021-11-26 浙江新再灵科技股份有限公司 Detection method and detection system for gas tank in straight ladder
CN113793508B (en) * 2021-09-27 2023-06-16 深圳市芊熠智能硬件有限公司 Anti-interference rapid detection method for entrance and exit unlicensed vehicle

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102044151A (en) * 2010-10-14 2011-05-04 吉林大学 Night vehicle video detection method based on illumination visibility identification

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156985A (en) * 2011-04-11 2011-08-17 上海交通大学 Method for counting pedestrians and vehicles based on virtual gate
CN102999759B (en) * 2012-11-07 2015-10-07 东南大学 A kind of state of motion of vehicle method of estimation based on light stream
CN104616497B (en) * 2015-01-30 2017-03-15 江南大学 Public transport emergency detection method
CN105608431A (en) * 2015-12-22 2016-05-25 杭州中威电子股份有限公司 Vehicle number and traffic flow speed based highway congestion detection method
CN107067417A (en) * 2017-05-11 2017-08-18 南宁市正祥科技有限公司 The moving target detecting method that LK optical flow methods and three frame difference methods are combined
CN107895379A (en) * 2017-10-24 2018-04-10 天津大学 The innovatory algorithm of foreground extraction in a kind of video monitoring
CN107844772A (en) * 2017-11-09 2018-03-27 汕头职业技术学院 A kind of motor vehicle automatic testing method based on movable object tracking


Also Published As

Publication number Publication date
CN109684996A (en) 2019-04-26

Similar Documents

Publication Publication Date Title
CN109684996B (en) Real-time vehicle access identification method based on video
Hu et al. Moving object detection and tracking from video captured by moving camera
US9158985B2 (en) Method and apparatus for processing image of scene of interest
Lai et al. Image-based vehicle tracking and classification on the highway
US9672434B2 (en) Video-based system and method for parking occupancy detection
Barcellos et al. A novel video based system for detecting and counting vehicles at user-defined virtual loops
Chen et al. An enhanced segmentation on vision-based shadow removal for vehicle detection
Kumar et al. An efficient approach for detection and speed estimation of moving vehicles
WO2022027931A1 (en) Video image-based foreground detection method for vehicle in motion
Toropov et al. Traffic flow from a low frame rate city camera
Denman et al. Multi-spectral fusion for surveillance systems
Saran et al. Traffic video surveillance: Vehicle detection and classification
Niu et al. A moving objects detection algorithm based on improved background subtraction
Zhao et al. APPOS: An adaptive partial occlusion segmentation method for multiple vehicles tracking
Chandrasekhar et al. A survey of techniques for background subtraction and traffic analysis on surveillance video
Hsieh et al. Grid-based template matching for people counting
Dave et al. Statistical survey on object detection and tracking methodologies
Tourani et al. Challenges of video-based vehicle detection and tracking in intelligent transportation systems
Fihl et al. Tracking of individuals in very long video sequences
Kapileswar et al. Automatic traffic monitoring system using lane centre edges
Liu et al. Shadow Elimination in Traffic Video Segmentation.
CN113627383A (en) Pedestrian loitering re-identification method for panoramic intelligent security
Zhao et al. Research on vehicle detection and vehicle type recognition under cloud computer vision
Ran et al. Multi moving people detection from binocular sequences
Chaiyawatana et al. Robust object detection on video surveillance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant