CN109684996A - Real-time vehicle entry and exit recognition method based on video - Google Patents
Real-time vehicle entry and exit recognition method based on video
- Publication number
- CN109684996A (application number CN201811576203.XA)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- moving
- image
- optical flow
- region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/254—Analysis of motion involving subtraction of images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/269—Analysis of motion using gradient-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
A real-time vehicle entry and exit recognition method based on video relates to an image processing method. The invention comprises installing a camera to capture images, detecting moving objects by combining the frame difference method with the dense optical flow method, detecting the foreground with background subtraction, fusing the motion foreground and extracting a color image of the vehicle region, tracking corner points of the moving target with the LK optical flow method, judging vehicle entry and exit, recognizing the color of the moving vehicle, and counting vehicles and outputting result images. The invention not only improves the completeness of the segmented vehicle region, but also segments the vehicle region regardless of whether the vehicle is moving; it can still detect the vehicle region when the vehicle stops, so the target is not lost. The method is highly robust.
Description
Technical field
The present invention relates to an image processing method, and in particular to a vehicle entry and exit recognition method.
Background technique
With China's economic development and continuous scientific and technological progress, smart city construction is being vigorously promoted in major cities. Intelligent parking lots are a part of this, and managing the entry and exit of vehicles in residential districts by information-based means has become a development trend. An efficient unattended parking system requires accurate, real-time detection of vehicle entry and exit behavior.
Common methods for judging vehicle entry and exit behavior include sensors based on ground induction coils or infrared induction, image recognition methods based on license plate detection, and wireless induction using ID cards. Some of these methods are difficult to install or can only be used in special scenes, and some are costly. Moreover, these methods cannot obtain complete vehicle information such as the vehicle image and color; if a vehicle image is to be captured, a separate camera must be installed for that purpose. Obtaining and saving more vehicle information provides more evidence for subsequent tasks such as vehicle payment and vehicle security, which is an essential link. A camera-based image recognition method can not only detect vehicle entry behavior but also obtain vehicle image information.
In addition to the above vehicle entry detection approaches, another class is based on intelligent video detection. A common approach uses a convolutional neural network to detect vehicles, but the network model is often very large, the required computer configuration is high, and it is difficult to meet real-time requirements. Another approach uses a traditional classifier and extracts vehicle features to distinguish vehicles from other objects, but this requires a large data set and training the classifier for each scene, which involves a heavy workload. There are also intelligent monitoring systems applied on highways that detect vehicles by extracting moving targets, but these generally require the vehicle to keep moving, so their applicability is limited.
Moving object detection methods have relatively low computational complexity and can more easily meet real-time requirements. They include the frame difference method, the background difference method, background modeling methods, optical flow methods, and so on. The frame difference method produces holes in the detected moving object, and only edge positions where the change is large are marked. The background difference method is strongly affected by illumination: if the illumination of the scene changes, the pixel values of stationary objects change and are marked as moving objects. Background modeling methods misjudge when a pixel changes suddenly or when phenomena such as slightly shaking objects occur. In the actual scene in front of a barrier gate, the environment is often complex, and general moving object detection methods find it difficult to achieve good results.
Summary of the invention
The purpose of the present invention is to overcome the defects of the above-mentioned technologies and to detect moving vehicles entering and exiting in the scene in front of a barrier gate.
To achieve the above purpose, the present invention proposes a real-time video-based vehicle entry and exit recognition method, comprising the following steps:
Step 1: install a camera and capture images;
Step 2: detect moving objects by combining the frame difference method with the dense optical flow method;
Step 3: detect the foreground with background subtraction;
Step 4: fuse the motion foreground and extract a color image of the vehicle region;
Step 5: track corner points of the moving target with the LK optical flow method;
Step 6: judge vehicle entry and exit;
Step 7: recognize the color of the moving vehicle, count vehicles, and output result images.
By fusing multiple motion detection methods to detect vehicles, the present invention achieves beneficial effects in the following aspects:
1. A moving object detection algorithm combining the frame difference method with the dense optical flow method. A traditional dense optical flow algorithm detects moving targets well and segments the moving region accurately, but it cannot eliminate the influence of external factors such as illumination. The frame difference method is little affected by illumination changes, but the segmented moving region has holes and is not connected. The present invention combines the two algorithms to detect moving targets: the method has good robustness to illumination, and the detected target region is a complete connected component without holes.
2. A foreground detection algorithm based on background subtraction. Traditional background subtraction is strongly affected by illumination changes and can only be applied when illumination changes in the scene are small. The present invention copes with illumination changes by updating the background in real time, and is robust both to slow illumination changes from morning to night and to sudden illumination changes in the scene such as lights being switched on.
3. A vehicle detection algorithm combining moving object detection with scene foreground detection. The moving object detection algorithm combining the frame difference method with the dense optical flow method segments the moving region very well, but in an actual scene a vehicle may pause, and the algorithm cannot segment the vehicle region during the pause. The background difference foreground extraction algorithm with background updating can segment the vehicle foreground from the background even when the vehicle stops, but if the vehicle color is close to the background color the segmented region may be incomplete. The present invention combines the two algorithms, which not only improves the completeness of the segmented vehicle region but also segments the vehicle region regardless of whether the vehicle is moving; the vehicle region can still be detected when the vehicle stops, so the target is not lost.
4. Corner tracking of the moving target based on the LK optical flow method. The advantage of recording the motion trajectory with this method is that what is recorded is truly a moving object. If the segmented vehicle region is erroneous and what was actually segmented is a change in illumination, the feature points do not move left or right because only pixel values change, so the corner movement distance recorded by the LK optical flow method is small, almost zero. For a truly moving vehicle, the movement distance is large. Therefore, even if the illumination changes, vehicle detection using this method remains robust.
Detailed description of the invention
Fig. 1 is a flowchart of the real-time video-based vehicle entry and exit recognition method of the present example
Fig. 2 is the camera installation diagram
Fig. 3 is an original image of a vehicle passing in front of the barrier gate
Fig. 4 is the grayscale image generated by the frame difference method
Fig. 5 is the binary image generated by the dense optical flow method
Fig. 6 is the extracted background color image
Fig. 7 is the foreground binary image generated by background subtraction
Fig. 8 is the vehicle color image after fusing the multiple motion detection results
Fig. 9 shows the corner tracking effect of the LK optical flow method
Fig. 10 is the result image after a vehicle enters
Fig. 11 shows the processing of another vehicle in the same scene
Fig. 12 shows the processing when a vehicle is stopping in another scene
Fig. 13 shows the processing when the stopped vehicle in that scene moves on again
Fig. 14 shows the processing when illumination causes scene changes
Specific embodiment
Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements with the same or similar functions. The embodiments described below with reference to the drawings are exemplary, serve only to explain the invention, and are not to be construed as limiting the claims.
As shown in Fig. 1, the real-time video-based vehicle entry and exit recognition method according to the present invention detects vehicles in front of a barrier gate. The specific implementation steps are as follows:
Step 1, camera collection image is installed.The present invention acquires image using a kind of flake wide-angle camera.Camera
It is mounted on banister case side, as shown in Fig. 2, with vehicle to drive into direction vertical for shooting direction.
Step 2: detect moving objects by combining the frame difference method with the dense optical flow method.
Step 2.1: frame difference moving object detection. The frame difference method subtracts the corresponding pixel values of two consecutive frames. If the difference is very small, the location is considered static; if the difference is large, the change is considered to be caused by object motion. Let the difference image be Y_k(i, j), let the pixels of frame k+1 and frame k at point (i, j) be T_{k+1}(i, j) and T_k(i, j), and let the result after thresholding be I_k(i, j). The frame difference formula is:

Y_k(i, j) = |T_{k+1}(i, j) - T_k(i, j)|    (1)

The threshold used to binarize Y_k(i, j) into I_k(i, j) is 30 in this embodiment. Fig. 3 shows the original image of the barrier gate area when a vehicle passes in front of it, and Fig. 4 shows the grayscale result produced by the frame difference method.
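As a concrete illustration of step 2.1, the following is a minimal sketch of the frame difference computation using OpenCV. The threshold of 30 comes from the description above; the video file name and variable names are illustrative assumptions.

```python
import cv2

def frame_difference(prev_gray, curr_gray, thresh=30):
    """Frame difference: |T_{k+1} - T_k| followed by thresholding (formula (1))."""
    diff = cv2.absdiff(curr_gray, prev_gray)                          # Y_k(i, j)
    _, binary = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)   # I_k(i, j)
    return diff, binary

# Example usage with two consecutive frames from a video source.
cap = cv2.VideoCapture("gate.mp4")          # hypothetical video file
ok1, frame_prev = cap.read()
ok2, frame_curr = cap.read()
if ok1 and ok2:
    g_prev = cv2.cvtColor(frame_prev, cv2.COLOR_BGR2GRAY)
    g_curr = cv2.cvtColor(frame_curr, cv2.COLOR_BGR2GRAY)
    diff_gray, diff_bin = frame_difference(g_prev, g_curr, thresh=30)
```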
Step 2.2: dense optical flow moving object detection. The grayscale image produced by the frame difference processing in step 2.1 is segmented into moving targets with the dense optical flow method. The dense optical flow method estimates the optical flow of the object from two adjacent frames, computing an optical flow vector for every pixel. Using polynomial expansion, the neighborhood of each pixel is first approximated by a quadratic polynomial; the displacement vector of the optical flow field is then estimated by analyzing the polynomial coefficients of the two consecutive frames. The result of this algorithm is shown in Fig. 5. A vehicle passing in front of the camera is generally the largest moving object, so if there are multiple moving regions in this scene, only the region with the largest area is retained.
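The polynomial-expansion method described above corresponds to Farneback's dense optical flow. A minimal sketch with OpenCV is given below, assuming the flow is computed between two consecutive grayscale frames; the magnitude threshold of 1.0 is an illustrative assumption, while keeping only the largest connected region follows the description.

```python
import cv2
import numpy as np

def dense_flow_mask(prev_gray, curr_gray, mag_thresh=1.0):
    """Farneback dense optical flow; keep only the largest moving region."""
    # Positional arguments: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])    # per-pixel flow magnitude
    mask = (mag > mag_thresh).astype(np.uint8) * 255

    # Retain only the connected component with the largest area.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if num > 1:
        largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
        mask = np.where(labels == largest, 255, 0).astype(np.uint8)
    return mask
```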
Step 3: foreground detection based on background subtraction.
Step 3.1: extract the moving region by background subtraction. This method subtracts the background image from the video frame and then thresholds the difference grayscale image to obtain a binary image of the moving region. Let the background difference image be Y_k(i, j), the pixel of frame k at point (i, j) be M_k(i, j), the current background image be B_k(i, j), and the result after thresholding be I_k(i, j). The background subtraction formula is:

Y_k(i, j) = |M_k(i, j) - B_k(i, j)|    (2)

The threshold used to binarize Y_k(i, j) into I_k(i, j) is 50 in this embodiment; "1" indicates that a foreground region appears in frame k, and "0" indicates the background region. Fig. 6 shows the scene background image, and Fig. 7 shows the foreground binary image obtained by background subtraction. Again, only the foreground region with the largest area is retained.
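A minimal sketch of step 3.1 with OpenCV follows; the threshold of 50 and the largest-region rule come from the description, and the function and variable names are illustrative.

```python
import cv2
import numpy as np

def background_foreground(frame_gray, background_gray, thresh=50):
    """Background subtraction: |M_k - B_k| thresholded into a foreground binary image (formula (2))."""
    diff = cv2.absdiff(frame_gray, background_gray)               # Y_k(i, j)
    _, fg = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)   # I_k(i, j): 255 = foreground, 0 = background

    # Keep only the largest foreground component, as in the description.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(fg)
    if num > 1:
        largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
        fg = np.where(labels == largest, 255, 0).astype(np.uint8)
    return fg
```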
Step 3.2 context update.Use the average value of 100 frame images before video as initial background, when not having in video
When having moving object, real-time update background image does not update background when having moving object.It is considered the judgement of moving region herein
There are two conditions: the sport foreground area that 1 background difference algorithm is partitioned into is less than 50 × 50.Two field pictures make before and after 2 regions
With LK optical flow method tracking characteristics point, characteristic point moving distance is less than 100.
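A possible sketch of the background initialization and conditional update is shown below. The 100-frame initial average, the 50 × 50 area limit, and the distance limit of 100 come from the description; the exponential running-average update rule (and its rate alpha) is an assumption, since the text only states that the background is updated in real time when no moving object is present.

```python
import cv2
import numpy as np

class BackgroundModel:
    """Average of the first 100 frames as the initial background; update only when no motion is present."""
    def __init__(self, init_frames):
        # init_frames: list of the first 100 grayscale frames.
        self.background = np.mean(np.stack(init_frames), axis=0).astype(np.float32)

    def maybe_update(self, frame_gray, fg_area, track_distance,
                     area_limit=50 * 50, dist_limit=100, alpha=0.05):
        """Update only when both the foreground area and the tracked motion are small."""
        if fg_area < area_limit and track_distance < dist_limit:
            # Exponential running average as a stand-in for "real-time update".
            cv2.accumulateWeighted(frame_gray.astype(np.float32), self.background, alpha)

    def current(self):
        return self.background.astype(np.uint8)
```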
Step 4: fuse the motion foreground and extract the vehicle region color image. The moving-region binary images obtained in step 2 and step 3 are superimposed and then ANDed with the original image; the result is shown in Fig. 8. A vehicle passing in front of the camera occupies a large moving area, so a moving region whose area is greater than a certain threshold is considered a vehicle and retained; otherwise it is rejected. The threshold used here is 0.15 times the area of the original image.
Step 5: track corner points of the moving target with the LK optical flow method. The vehicle color image is first converted to a grayscale image, key points in the image are detected with the Shi-Tomasi corner detection algorithm, and these points are then tracked iteratively with the Lucas-Kanade algorithm. The tracking trajectories are drawn on the image; the effect is shown in Fig. 9.
Step 6, moving vehicle disengaging judgement.There are two conditions for the judgement of vehicles while passing: 1, must have company in video sequence
The continuous 10 frame images that are greater than are detected the vehicle region for step 4, otherwise it is assumed that being environmental change caused by illumination.2, record is every
One frame image, all key point moving distances of LK optical flow tracking and, vehicle enters appearance distance and recognizing greater than 100 in sequence
Pass through for vehicle, is otherwise non-vehicle region.Meet the movement that the two conditions are judged as vehicle simultaneously, then according to LK light stream
The key point moving direction of method tracking judges vehicles while passing direction.Such as Fig. 3 is the scene before the access hatch of ground library, if key point
It is moved to the left, is judged as that vehicle enters, is driven out on the contrary for vehicle.
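The sketch below illustrates the two-condition judgment and the direction rule of step 6 under stated assumptions: the 10-frame and distance-of-100 thresholds come from the description, while the consecutive-frame counting, the use of the mean horizontal displacement for direction, and the helper names are illustrative implementation choices.

```python
import numpy as np

def longest_true_run(flags):
    """Length of the longest run of consecutive True values."""
    best = run = 0
    for f in flags:
        run = run + 1 if f else 0
        best = max(best, run)
    return best

def judge_entry_exit(per_frame_vehicle_found, per_frame_tracks,
                     min_frames=10, min_total_distance=100):
    """per_frame_vehicle_found: list of bools (vehicle region found in step 4).
    per_frame_tracks: list of (old_pts, new_pts) pairs from track_corners()."""
    # Condition 1: vehicle region detected in more than `min_frames` consecutive frames.
    if longest_true_run(per_frame_vehicle_found) <= min_frames:
        return "no vehicle (illumination change)"

    # Condition 2: accumulated key-point displacement over the sequence exceeds `min_total_distance`.
    total = sum(float(np.linalg.norm(new - old, axis=1).sum())
                for old, new in per_frame_tracks)
    if total <= min_total_distance:
        return "non-vehicle region"

    # Direction: in the Fig. 3 scene, leftward key-point motion means the vehicle is entering.
    dx = np.concatenate([new[:, 0] - old[:, 0] for old, new in per_frame_tracks])
    return "vehicle entering" if dx.mean() < 0 else "vehicle exiting"
```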
Step 7, moving vehicle color identifies.To the vehicle image being partitioned into, first by color space conversion to HSV, so
The pixel number in vehicle image in different color range is calculated separately according to quantization template afterwards, counts pixel number
Color gamut most pair is vehicle color.Hsv color statistical mask is table 1.
1 hsv color statistical mask of table
After vehicle enters, calculating vehicle number simultaneously exports result images, as shown in Figure 10.
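A minimal sketch of the HSV counting scheme of step 7 follows. Table 1 is not reproduced in this text, so the HSV ranges below are illustrative placeholder bins, not the patent's quantization template; only the counting structure follows the description.

```python
import cv2
import numpy as np

# Illustrative HSV bins only -- the actual quantization template is Table 1 of the patent.
HSV_BINS = {
    "red":    [((0, 43, 46), (10, 255, 255)), ((156, 43, 46), (180, 255, 255))],
    "yellow": [((26, 43, 46), (34, 255, 255))],
    "blue":   [((100, 43, 46), (124, 255, 255))],
    "white":  [((0, 0, 221), (180, 30, 255))],
    "black":  [((0, 0, 0), (180, 255, 46))],
}

def classify_vehicle_color(vehicle_bgr, mask):
    """Count masked pixels falling in each HSV range; the range with the most pixels wins."""
    hsv = cv2.cvtColor(vehicle_bgr, cv2.COLOR_BGR2HSV)
    counts = {}
    for name, ranges in HSV_BINS.items():
        total = 0
        for lo, hi in ranges:
            in_range = cv2.inRange(hsv, np.array(lo), np.array(hi))
            total += cv2.countNonZero(cv2.bitwise_and(in_range, mask))
        counts[name] = total
    return max(counts, key=counts.get)
```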
Fig. 11 shows the processing of another vehicle in the same scene. As can be seen from the figure, neither the dense optical flow binary image nor the background difference binary image segments the vehicle region completely on its own, but the superimposed result is more complete. In the background difference binary image, part of the gate is segmented in addition to the vehicle, but this has no effect on the LK optical flow corner tracking.
Fig. 12 shows the processing of a vehicle in front of the gate in another scene; the vehicle is stopping at this time. It can be seen from the figure that the dense optical flow binary image detects no moving region, but background subtraction can still segment the vehicle region, so the vehicle is still detected after superposition and is not lost when it stops. Because the vehicle is stopped, the corners tracked by the LK optical flow method do not change and show no displacement.
Fig. 13 shows the stopped vehicle moving on again: the corners tracked by the LK optical flow method are displaced, a regular trajectory is drawn in the figure, and the result is produced after the vehicle enters.
From Figs. 12 and 13 it can be seen that the algorithm of the invention is robust to vehicle stops: vehicle entry is detected well, detection proceeds normally, and the moving target is not lost.
Fig. 14 shows the processing when illumination causes scene changes. Here the dense optical flow method detects no moving target, while background subtraction segments the changed region, so after fusion the region of change caused by the illumination variation is isolated. This region is not a moving vehicle region, and in the LK optical flow corner tracking step no corners are detected on a vehicle body, so there is no motion trajectory. Moreover, a sudden illumination change lasts only two or three frames, so the moving region does not persist for more than 10 consecutive frames of the video sequence and the threshold condition of the vehicle entry and exit judgment is not met. Therefore, even a changed region caused by an illumination variation does not affect the final vehicle detection result.
It can be seen from Fig. 14 that the algorithm of the invention is robust to illumination changes, which do not affect the detection of vehicle entry and exit. The algorithm achieves a good effect.
Claims (2)
1. A real-time video-based vehicle entry and exit recognition method, characterized by comprising the following steps:
Step 1: install a camera and capture images;
Step 2: detect moving objects by combining the frame difference method with the dense optical flow method;
Step 3: detect the foreground with background subtraction;
Step 4: fuse the motion foreground and extract a color image of the vehicle region;
Step 5: track corner points of the moving target with the LK optical flow method;
Step 6: judge vehicle entry and exit;
Step 7: recognize the color of the moving vehicle, count vehicles, and output result images.
2. The real-time video-based vehicle entry and exit recognition method according to claim 1, characterized in that the specific implementation steps are as follows:
Step 1: install a camera and capture images;
Step 2: detect moving objects by combining the frame difference method with the dense optical flow method, specifically:
Step 2.1: frame difference moving object detection.
The frame difference method subtracts the corresponding pixel values of two consecutive frames;
let the difference image be Y_k(i, j), let the pixels of frame k+1 and frame k at point (i, j) be T_{k+1}(i, j) and T_k(i, j), and let the result after thresholding be I_k(i, j); the frame difference formula is:
Y_k(i, j) = |T_{k+1}(i, j) - T_k(i, j)|    (1)
where the threshold used to obtain I_k(i, j) is 30;
Step 2.2: dense optical flow moving object detection.
The grayscale image produced by the frame difference processing in step 2.1 is segmented into moving targets with the dense optical flow method;
a vehicle passing in front of the camera is generally the largest moving object, so if there are multiple moving regions in the scene, only the region with the largest area is retained;
Step 3: foreground detection based on background subtraction, specifically:
Step 3.1: extract the moving region by background subtraction.
The background image is subtracted from the video frame, and the difference grayscale image is then thresholded to obtain a binary image of the moving region; let the background difference image be Y_k(i, j), the pixel of frame k at point (i, j) be M_k(i, j), the current background image be B_k(i, j), and the result after thresholding be I_k(i, j); the background subtraction formula is:
Y_k(i, j) = |M_k(i, j) - B_k(i, j)|    (2)
where the threshold used to obtain I_k(i, j) is 50; "1" indicates that a foreground region appears in frame k, and "0" indicates the background region;
in the foreground binary image obtained by background subtraction, only the foreground region with the largest area is retained;
Step 3.2: background update.
The average of the first 100 frames of the video is used as the initial background; when there is no moving object in the video, the background image is updated in real time, and when there is a moving object, the background is not updated;
two conditions are used here to judge that no moving object is present: (1) the moving foreground area segmented by the background difference algorithm is smaller than 50 × 50; (2) feature points in the region tracked by the LK optical flow method between the two consecutive frames move less than 100;
Step 4: fuse the motion foreground and extract the vehicle region color image.
The moving-region binary images obtained in step 2 and step 3 are superimposed and then ANDed with the original image;
a vehicle passing in front of the camera occupies a large moving area, so a moving region whose area is greater than a certain threshold is considered a vehicle and retained, and otherwise rejected; the threshold used here is 0.15 times the area of the original image;
Step 5: track corner points of the moving target with the LK optical flow method; the vehicle color image is converted to a grayscale image, key points in the image are detected with the Shi-Tomasi corner detection algorithm, and these points are then tracked iteratively with the Lucas-Kanade algorithm; the tracking trajectories are drawn on the image;
Step 6: judge vehicle entry and exit.
Two conditions are used for the judgment: (1) a vehicle region from step 4 must be detected in more than 10 consecutive frames of the video sequence, otherwise the change is regarded as an environmental change caused by illumination; (2) for each frame, the sum of the moving distances of all key points tracked by the LK optical flow method is recorded, and a vehicle is regarded as having passed only if the accumulated distance over the sequence is greater than 100, otherwise the region is a non-vehicle region; when both conditions are met, the motion is judged to be that of a vehicle, and the entry or exit direction is then determined from the moving direction of the key points tracked by the LK optical flow method;
Step 7: recognize the color of the moving vehicle.
The segmented vehicle image is first converted to the HSV color space; the number of pixels falling in each color range of a quantization template is then counted, and the color range containing the most pixels is taken as the vehicle color; after the vehicle enters, the vehicle count is updated and the result image is output.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811576203.XA CN109684996B (en) | 2018-12-22 | 2018-12-22 | Real-time vehicle access identification method based on video |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109684996A (en) | 2019-04-26 |
CN109684996B CN109684996B (en) | 2020-12-04 |
Family
ID=66188956
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811576203.XA Active CN109684996B (en) | 2018-12-22 | 2018-12-22 | Real-time vehicle access identification method based on video |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109684996B (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102044151A (en) * | 2010-10-14 | 2011-05-04 | 吉林大学 | Night vehicle video detection method based on illumination visibility identification |
CN102156985A (en) * | 2011-04-11 | 2011-08-17 | 上海交通大学 | Method for counting pedestrians and vehicles based on virtual gate |
CN102999759A (en) * | 2012-11-07 | 2013-03-27 | 东南大学 | Light stream based vehicle motion state estimating method |
CN104616497A (en) * | 2015-01-30 | 2015-05-13 | 江南大学 | Public transportation emergency detection method |
CN105608431A (en) * | 2015-12-22 | 2016-05-25 | 杭州中威电子股份有限公司 | Vehicle number and traffic flow speed based highway congestion detection method |
CN107067417A (en) * | 2017-05-11 | 2017-08-18 | 南宁市正祥科技有限公司 | The moving target detecting method that LK optical flow methods and three frame difference methods are combined |
CN107895379A (en) * | 2017-10-24 | 2018-04-10 | 天津大学 | The innovatory algorithm of foreground extraction in a kind of video monitoring |
CN107844772A (en) * | 2017-11-09 | 2018-03-27 | 汕头职业技术学院 | A kind of motor vehicle automatic testing method based on movable object tracking |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110189494A (en) * | 2019-07-01 | 2019-08-30 | 深圳江行联加智能科技有限公司 | A kind of substation's exception monitoring alarm system |
CN111083363A (en) * | 2019-12-16 | 2020-04-28 | 河南铭视科技股份有限公司 | Video recorder management system |
CN111523385A (en) * | 2020-03-20 | 2020-08-11 | 北京航空航天大学合肥创新研究院 | Stationary vehicle detection method and system based on frame difference method |
CN111523385B (en) * | 2020-03-20 | 2022-11-04 | 北京航空航天大学合肥创新研究院 | Stationary vehicle detection method and system based on frame difference method |
CN111626179A (en) * | 2020-05-24 | 2020-09-04 | 中国科学院心理研究所 | Micro-expression detection method based on optical flow superposition |
CN111626179B (en) * | 2020-05-24 | 2023-04-28 | 中国科学院心理研究所 | Micro-expression detection method based on optical flow superposition |
CN111781600A (en) * | 2020-06-18 | 2020-10-16 | 重庆工程职业技术学院 | Vehicle queuing length detection method suitable for signalized intersection scene |
CN111914627A (en) * | 2020-06-18 | 2020-11-10 | 广州杰赛科技股份有限公司 | Vehicle identification and tracking method and device |
CN111781600B (en) * | 2020-06-18 | 2023-05-30 | 重庆工程职业技术学院 | Vehicle queuing length detection method suitable for signalized intersection scene |
CN112200101B (en) * | 2020-10-15 | 2022-10-14 | 河南省交通规划设计研究院股份有限公司 | Video monitoring and analyzing method for maritime business based on artificial intelligence |
CN112200101A (en) * | 2020-10-15 | 2021-01-08 | 河南省交通规划设计研究院股份有限公司 | Video monitoring and analyzing method for maritime business based on artificial intelligence |
CN112597953A (en) * | 2020-12-28 | 2021-04-02 | 深圳市捷顺科技实业股份有限公司 | Method, device, equipment and medium for detecting pedestrians in channel gate area in video |
CN112597953B (en) * | 2020-12-28 | 2024-04-09 | 深圳市捷顺科技实业股份有限公司 | Method, device, equipment and medium for detecting passerby in passerby area in video |
WO2022198897A1 (en) * | 2021-03-23 | 2022-09-29 | 超级视线科技有限公司 | Management method and device for on-street parking |
CN113066306A (en) * | 2021-03-23 | 2021-07-02 | 超级视线科技有限公司 | Management method and device for roadside parking |
CN113705434A (en) * | 2021-08-27 | 2021-11-26 | 浙江新再灵科技股份有限公司 | Detection method and detection system for gas tank in straight ladder |
CN113793508A (en) * | 2021-09-27 | 2021-12-14 | 深圳市芊熠智能硬件有限公司 | Entrance and exit unlicensed vehicle anti-interference rapid detection method |
Also Published As
Publication number | Publication date |
---|---|
CN109684996B (en) | 2020-12-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109684996A (en) | Real-time vehicle entry and exit recognition method based on video | |
Hu et al. | Moving object detection and tracking from video captured by moving camera | |
CN106875424B (en) | A kind of urban environment driving vehicle Activity recognition method based on machine vision | |
Wang et al. | Robust video-based surveillance by integrating target detection with tracking | |
Huang et al. | Feature-Based Vehicle Flow Analysis and Measurement for a Real-Time Traffic Surveillance System. | |
CN102609720B (en) | Pedestrian detection method based on position correction model | |
Liu et al. | A survey of vision-based vehicle detection and tracking techniques in ITS | |
Pätzold et al. | Counting people in crowded environments by fusion of shape and motion information | |
CN111932583A (en) | Space-time information integrated intelligent tracking method based on complex background | |
CN104978567B (en) | Vehicle checking method based on scene classification | |
Denman et al. | Multi-spectral fusion for surveillance systems | |
CN108229256A (en) | A kind of road construction detection method and device | |
Song et al. | Image-based traffic monitoring with shadow suppression | |
Xia et al. | Automatic multi-vehicle tracking using video cameras: An improved CAMShift approach | |
Babaei | Vehicles tracking and classification using traffic zones in a hybrid scheme for intersection traffic management by smart cameras | |
Hou et al. | Human detection and tracking over camera networks: A review | |
Chen et al. | Object tracking over a multiple-camera network | |
Niknejad et al. | Embedded multi-sensors objects detection and tracking for urban autonomous driving | |
Tourani et al. | Challenges of video-based vehicle detection and tracking in intelligent transportation systems | |
Shbib et al. | Distributed monitoring system based on weighted data fusing model | |
Parsola et al. | Automated system for road extraction and traffic volume estimation for traffic jam detection | |
Huang et al. | A vehicle flow counting system in rainy environment based on vehicle feature analysis. | |
Kapileswar et al. | Automatic traffic monitoring system using lane centre edges | |
Ran et al. | Multi moving people detection from binocular sequences | |
Yao et al. | Multi-Person Bayesian Tracking with Multiple Cameras. |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||