CN109087510A - Traffic monitoring method and device - Google Patents
Traffic monitoring method and device
- Publication number
- CN109087510A (application number CN201811157235.6A)
- Authority
- CN
- China
- Prior art keywords
- image frame
- current image
- frame
- target
- specified
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS; G08—SIGNALLING; G08G—TRAFFIC CONTROL SYSTEMS; G08G1/00—Traffic control systems for road vehicles; G08G1/01—Detecting movement of traffic to be counted or controlled
  - G08G1/0133—Traffic data processing for classifying traffic situation
  - G08G1/052—Detecting movement of traffic to be counted or controlled with provision for determining speed or overspeed
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06F—ELECTRIC DIGITAL DATA PROCESSING; G06F18/00—Pattern recognition
  - G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
  - G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
  - G06V2201/07—Target detection
  - G06V2201/08—Detecting or categorising vehicles
Abstract
An embodiment of the present invention provides a traffic monitoring method and device, belonging to the technical field of video processing. The method comprises: inputting a current image frame into a vehicle detection model and outputting the target detection boxes in the current image frame; determining, according to the target detection boxes, the location information of each specified tracking target in the current image frame, and determining, according to that location information, the average speed of all specified tracking targets at the moment corresponding to the current image frame, a specified tracking target being a vehicle under specified tracking at the moment corresponding to the current image frame; and judging, according to the average speed, whether traffic congestion occurs in the traffic video corresponding to the current image frame. Because the target detection boxes in the current image frame are identified through the vehicle detection model, and the vehicle detection model can be trained on sample images of various categories, the interference of external environmental factors can be overcome and the accuracy of the monitoring result improved.
Description
Technical field
Embodiments of the present invention relate to the technical field of video processing, and in particular to a traffic monitoring method and device.
Background art
With the development of computer technology, intelligent transportation systems based on video images have emerged. In such systems, real-time traffic video is the key to urban transportation research. When monitoring traffic, the related art mainly determines the vehicle information on the road in a video image either by foreground detection and extraction or by feature-point tracking analysis of the video image. Since both approaches are susceptible to varying external environmental factors such as illumination and shadow, the monitoring results are inaccurate.
Summary of the invention
To solve the above problems, embodiments of the present invention provide a traffic monitoring method and device that overcome the above problems or at least partially solve them.
According to a first aspect of the embodiments of the present invention, a traffic monitoring method is provided, comprising:
inputting a current image frame into a vehicle detection model, and outputting the target detection boxes in the current image frame, a target detection box being used to indicate a vehicle in the current image frame;
determining, according to the target detection boxes, the location information of each specified tracking target in the current image frame, and determining, according to the location information of the specified tracking targets in the current image frame, the average speed of all specified tracking targets at the moment corresponding to the current image frame, a specified tracking target being a vehicle under specified tracking at that moment; and
judging, according to the average speed, whether traffic congestion occurs in the traffic video corresponding to the current image frame.
In the method provided by the embodiment of the present invention, the current image frame is input into the vehicle detection model, and the target detection boxes in the current image frame are output. According to the target detection boxes, the location information of each specified tracking target in the current image frame is determined, and according to that location information, the average speed of all specified tracking targets at the moment corresponding to the current image frame is determined. According to the average speed, it is judged whether traffic congestion occurs in the traffic video corresponding to the current image frame. Because the target detection boxes in the current image frame are identified through the vehicle detection model, and the vehicle detection model can be trained on sample images of various categories, the interference of external environmental factors can be overcome and the accuracy of the monitoring result improved.
According to a second aspect of the embodiments of the present invention, a traffic monitoring device is provided, comprising:
an output module, configured to input a current image frame into a vehicle detection model and output the target detection boxes in the current image frame, a target detection box being used to indicate a vehicle in the current image frame;
a first determining module, configured to determine, according to the target detection boxes, the location information of each specified tracking target in the current image frame, and to determine, according to that location information, the average speed of all specified tracking targets at the moment corresponding to the current image frame, a specified tracking target being a vehicle under specified tracking at that moment; and
a judgment module, configured to judge, according to the average speed, whether traffic congestion occurs in the traffic video corresponding to the current image frame.
According to a third aspect of the embodiments of the present invention, an electronic device is provided, comprising:
at least one processor; and
at least one memory communicatively connected to the processor, wherein:
the memory stores program instructions executable by the processor, and by calling the program instructions the processor is able to carry out the traffic monitoring method provided by any possible implementation of the various possible implementations of the first aspect.
According to a fourth aspect of the present invention, a non-transitory computer-readable storage medium is provided. The non-transitory computer-readable storage medium stores computer instructions, and the computer instructions cause a computer to execute the traffic monitoring method provided by any possible implementation of the various possible implementations of the first aspect.
It should be understood that the above general description and the following detailed description are exemplary and explanatory, and do not limit the embodiments of the present invention.
Description of the drawings
In order to explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description illustrate some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a traffic monitoring method provided by an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of a deep neural network provided by an embodiment of the present invention;
Fig. 3 is a schematic flowchart of a traffic monitoring method provided by an embodiment of the present invention;
Fig. 4 is a schematic flowchart of a traffic monitoring method provided by an embodiment of the present invention;
Fig. 5 is a schematic flowchart of a traffic monitoring method provided by an embodiment of the present invention;
Fig. 6 is a schematic flowchart of a traffic monitoring method provided by an embodiment of the present invention;
Fig. 7 is a schematic flowchart of a traffic monitoring method provided by an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of a traffic monitoring device provided by an embodiment of the present invention;
Fig. 9 is a block diagram of an electronic device provided by an embodiment of the present invention.
Detailed description of the embodiments
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are a part of the embodiments of the present invention rather than all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
At present, common traffic monitoring methods fall mainly into three classes: first, manual monitoring and observation; second, detection by sensors buried underground; and third, suspended detection systems. The underground-sensor methods obtain traffic parameters by means such as geomagnetic detection and coil detection. Suspended detection methods mainly include ultrasonic, radar, infrared and video-sequence detection. Although manual monitoring has high accuracy, it consumes a great deal of manpower and material resources and cannot make the detection system intelligent. The underground-sensor methods, although stable and not easily affected by the external environment, have the disadvantages of inconvenient installation and heavy damage to the road surface. The ultrasonic, radar and infrared methods among the suspended detection methods have a single detection function, low detection accuracy and poor real-time performance. By comparison, the video detection method has the advantages of convenient installation and maintenance, high detection accuracy, little influence from the external environment and good real-time performance, and has therefore received wide attention and application.
In an intelligent transportation system based on video images, real-time traffic video is the key to urban transportation research. When monitoring traffic, the related art mainly determines the vehicle information on the road in a video image either by foreground detection and extraction or by feature-point tracking analysis of the video image. Since both approaches are susceptible to varying external environmental factors such as illumination and shadow, the monitoring results are inaccurate.
In view of the above situation, an embodiment of the present invention provides a traffic monitoring method. It should be noted that, in the application scenario of this method, cameras may be laid out in advance for each road section, so that whether traffic congestion currently occurs is judged according to the real-time monitoring video returned by the cameras. Referring to Fig. 1, the method comprises:
101. Input the current image frame into a vehicle detection model, and output the target detection boxes in the current image frame, a target detection box being used to indicate a vehicle in the current image frame.
This step mainly uses the vehicle detection model to quickly and accurately identify all vehicles in the region monitored by the current image frame, and to determine the spatial information of those vehicles. A target detection box represents the possible position of a vehicle in the current image frame. A target detection box may be represented by the coordinates of its center and edge points, which is not specifically limited by the embodiment of the present invention. Correspondingly, outputting the target detection boxes in the current image frame means outputting the center and edge-point coordinates of the target detection boxes, that is, the spatial information of the vehicles. The vehicle detection model may specifically be a deep neural network model or a convolutional neural network model, which is not specifically limited by the present invention.
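As an illustrative sketch of the box representation discussed above (the patent leaves the exact encoding open), a detection box can be stored by its center point and size and converted to corner coordinates on demand; all names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class DetectionBox:
    """Hypothetical target detection box: center coordinates plus width/height."""
    cx: float
    cy: float
    w: float
    h: float

    def corners(self):
        """Edge-point (corner) coordinates as (x_min, y_min, x_max, y_max)."""
        return (self.cx - self.w / 2, self.cy - self.h / 2,
                self.cx + self.w / 2, self.cy + self.h / 2)
```

Either representation, center plus size or corner points, carries the same spatial information, which is why the embodiment does not need to fix one.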
Based on the content of the above embodiment, as an optional embodiment, the vehicle detection model may be obtained by training on sample images in which the targets have been determined. In addition, the sample images used for training may include multiple categories, including but not limited to sample images under different external environmental factors, such as different illumination intensities, so that the influence of external environmental factors can be overcome when the deep neural network is subsequently used.
Taking the vehicle detection model being a deep neural network model as an example, the deep neural network model may specifically be a YOLOv3 network model, which is not specifically limited by the present invention. Based on the content of the above embodiment, as an optional embodiment, the deep neural network model may consist of a feature extraction network, a multi-scale prediction network and a multi-label classification prediction network. The feature extraction network may consist of a large number of 3*3 and 1*1 convolutional layers, and the layers may be connected by the shortcut connections of a residual network, as shown in Fig. 2. In Fig. 2, X is the output of the previous activation layer, F(X) is the output of the current convolutional layer, the input to the next layer is F(X)+X, and relu denotes the activation function of the layer.
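The shortcut connection can be sketched as follows; this is a toy version in which any callable stands in for the convolution F, not the actual Fig. 2 network:

```python
def relu(v):
    """Element-wise ReLU activation."""
    return [max(0.0, x) for x in v]

def residual_block(x, f):
    """Residual shortcut: the block outputs relu(F(X) + X), where f stands in
    for the current convolutional layer (any callable on a vector here)."""
    fx = f(x)
    return relu([a + b for a, b in zip(fx, x)])
```

The shortcut lets the input bypass F, which is what makes very deep feature extraction networks trainable.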
The YOLOv3 network model makes predictions by fusing multiple scales, and therefore has a relatively good detection and recognition effect on small targets. The feature map of the first prediction layer has size N*N; the feature map of the first prediction layer is up-sampled and passed through certain convolutional layers to form the feature map of the second prediction layer, of size 2N*2N; the feature map of the second prediction layer is up-sampled and passed through certain convolutional layers to form the feature map of the third prediction layer, of size 4N*4N. Each grid cell on each prediction-layer feature map is responsible for predicting the position, class and score of 3 bounding boxes. The loss function consists of three parts: coordinate error, IOU (Intersection over Union) error and classification error, and can be expressed by the following formula (1):
loss = Σ(i=1..W*W) (error_coord + error_iou + error_class)   (1)
where W*W is the feature map size, error_coord is the coordinate error, error_iou is the IOU error, and error_class is the classification error. Based on the content of the above embodiment, as an optional embodiment, the class prediction for each target bounding box of the YOLOv3 network model no longer uses the common softmax classifier, but instead uses several simple logistic regression classifiers. A softmax classifier assigns each bounding box exactly one class, namely the class with the highest score. However, bounding boxes may have overlapping class labels, so softmax is not suitable for multi-label classification. Moreover, using several simple logistic regression classifiers does not reduce the classification accuracy. It should be noted that a tracking target is a vehicle that has been tracked continuously in the preceding image frames.
102. According to the target detection boxes, determine the location information of each specified tracking target in the current image frame, a specified tracking target being a vehicle under specified tracking at the moment corresponding to the current image frame.
From step 101 above, the vehicle detection model can output the center and edge-point coordinates of the target detection boxes, that is, the location information of the target detection boxes in the current image frame. Among these target detection boxes, some correspond to tracking targets that have been tracked continuously in the preceding image frames, while some may be newly appearing vehicles that did not occur in the image frames before the current image frame. Since it is subsequently necessary to judge whether traffic congestion occurs in the traffic video, if the basis of that judgment is high-precision tracking targets, the monitoring result will be more accurate. Therefore, the location information of the specified tracking targets in the current image frame may be determined according to the target detection boxes. A specified tracking target may be a vehicle under specified tracking at the moment corresponding to the current image frame, that is, a high-precision tracking target screened out from the tracking targets. Of course, no screening may be performed; in that case the specified tracking targets are simply the tracking targets that have been tracked continuously in the preceding image frames. The embodiment of the present invention does not specifically limit this.
103. According to the location information of the specified tracking targets in the current image frame, determine the average speed of all specified tracking targets at the moment corresponding to the current image frame.
For any image frame before the current image frame, the location information of a specified tracking target on that image frame can likewise be determined according to the above process. Therefore, after the location information of a specified tracking target in the current image frame is obtained, combined with its location information on the preceding image frames, the average speed of the specified tracking targets at the moment corresponding to the current image frame can be determined.
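A minimal sketch of this step, assuming per-target box centers are available for two consecutive frames; the distance unit and the mapping to road coordinates are left open by the patent, so plain pixel units are used here:

```python
import math

def average_speed(prev_positions, curr_positions, dt):
    """Average speed over all specified tracking targets at the current moment.

    prev_positions / curr_positions: {target_id: (x, y)} box centers in the
    previous and current frame; dt: time between the two frames in seconds.
    Targets present in only one of the frames are skipped."""
    common = prev_positions.keys() & curr_positions.keys()
    if not common:
        return 0.0
    speeds = []
    for tid in common:
        (x0, y0), (x1, y1) = prev_positions[tid], curr_positions[tid]
        speeds.append(math.hypot(x1 - x0, y1 - y0) / dt)
    return sum(speeds) / len(speeds)
```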
104. According to the average speed, judge whether traffic congestion occurs in the traffic video corresponding to the current image frame.
Since the average speed reflects the overall travelling speed of the vehicles, whether traffic congestion occurs in the traffic video corresponding to the current image frame can be judged according to the average speed. Specifically, the average speed may be directly compared with a preset threshold: if it is below the preset threshold, it is determined that traffic congestion occurs; if it is not below the preset threshold, it is determined that no traffic congestion occurs. It should be noted that the traffic video consists of the image frames between a start image frame and an end image frame (including the start image frame and the end image frame), and that an average speed can only be calculated from at least two frames of data. Combining this with the above content, the end image frame of the traffic video is the later current image frame, and the start image frame of the traffic video is the earlier first frame used when calculating the average speed.
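The threshold comparison of step 104 then reduces to a one-line check; the default threshold below is purely illustrative, since the patent leaves its value to configuration:

```python
def is_congested(avg_speed, threshold=20.0):
    """Step 104 as a threshold comparison: a low overall travelling speed
    signals congestion. The 20.0 default is an assumed illustrative value,
    not one given by the patent."""
    return avg_speed < threshold
```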
In the method provided by the embodiment of the present invention, the current image frame is input into the vehicle detection model, and the target detection boxes in the current image frame are output. According to the target detection boxes, the location information of each specified tracking target in the current image frame is determined, and according to that location information, the average speed of all specified tracking targets at the moment corresponding to the current image frame is determined. According to the average speed, it is judged whether traffic congestion occurs in the traffic video corresponding to the current image frame. Because the target detection boxes in the current image frame are identified through the vehicle detection model, and the vehicle detection model can be trained on sample images of various categories, the interference of external environmental factors can be overcome and the accuracy of the monitoring result improved.
From the content of the above embodiment, it is subsequently necessary to judge whether traffic congestion occurs in the traffic video; if the basis of that judgment is high-precision tracking targets, the monitoring result will be more accurate. Based on the above principle and the content of the above embodiment, as an optional embodiment, before the location information of the specified tracking targets in the current image frame is determined according to the target detection boxes, the specified tracking targets may also be determined; the embodiment of the present invention does not specifically limit the method of determining the specified tracking targets. Referring to Fig. 3, the method includes but is not limited to:
301. For any tracking target, determine the start image frame in which the tracking target was detected for the first time, and determine the tracking length of the tracking target according to the frame-number difference between the current image frame and the start image frame; a tracking target is a vehicle being tracked at the moment corresponding to the current image frame.
For ease of understanding, for any tracking target, take the start image frame in which the tracking target was detected for the first time to be the 1st frame, and the current image frame to be the 10th frame. Since this is equivalent to having tracked the target continuously for 10 frames from the 1st frame to the 10th frame, according to the frame-number difference between the current image frame and the start image frame, the tracking length can be determined to be 10.
302. Count the total number of frames, from the start image frame to the current image frame, in which the tracking target was detected, and use it as the detection weight value of the tracking target.
Continuing the above example, if the vehicle corresponding to the tracking target is vehicle i, and vehicle i, in addition to being detected in the 1st frame and the 10th frame, was also detected in the 2nd, 3rd, 6th and 7th frames, then the total number of frames from the start image frame to the current image frame in which vehicle i was detected is 6. Correspondingly, the detection weight value of vehicle i as a tracking target is 6.
303. If the tracking length of the tracking target is greater than a first preset threshold, the detection weight value of the tracking target is greater than a second preset threshold, and the tracking target is detected in the current image frame, take the tracking target as a specified tracking target.
The sizes of the first preset threshold and the second preset threshold can be configured as required, which is not specifically limited by the embodiment of the present invention. It should be noted that each tracking target can be judged according to the above process, and after this step is completed the specified tracking targets are obtained from the multiple tracking targets. It should also be noted that when the specified tracking targets are screened from the tracking targets by the above process, the number of specified tracking targets may be 0.
In the method provided by the embodiment of the present invention, for any tracking target, the start image frame in which the tracking target was detected for the first time is determined, and the tracking length of the tracking target is determined according to the frame-number difference between the current image frame and the start image frame. The total number of frames, from the start image frame to the current image frame, in which the tracking target was detected is counted and used as the detection weight value of the tracking target. If the tracking length of the tracking target is greater than the first preset threshold, the detection weight value of the tracking target is greater than the second preset threshold, and the tracking target is detected in the current image frame, the tracking target is taken as a specified tracking target. Since high-precision specified tracking targets can be screened from the tracking targets, the accuracy of the monitoring result can be improved.
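Steps 301 to 303 can be sketched as a single screening pass; the track-record layout and field names below are assumptions, not from the patent, and the tracking length is counted inclusively so that frames 1 through 10 give a length of 10, matching the example above:

```python
def select_specified_targets(tracks, current_frame, len_threshold, weight_threshold):
    """Screen high-precision specified tracking targets (steps 301-303).

    tracks: {target_id: {"start_frame": int, "detected_frames": set of ints}}
    A target qualifies if its tracking length exceeds len_threshold, its
    detection weight value (number of frames in which it was detected)
    exceeds weight_threshold, and it is detected in the current frame."""
    specified = []
    for tid, t in tracks.items():
        tracking_length = current_frame - t["start_frame"] + 1
        detection_weight = len(t["detected_frames"])
        if (tracking_length > len_threshold
                and detection_weight > weight_threshold
                and current_frame in t["detected_frames"]):
            specified.append(tid)
    return specified
```

With the thresholds set high enough, the returned list may be empty, which matches the note that the number of specified tracking targets can be 0.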
In the course of traffic monitoring, a vehicle that appears in the first image frame is continuously tracked as a tracking target, and a vehicle that appears in the previous image frame and serves as a tracking target may appear again in the following image frames. Therefore, in actual implementation, the location information of a specified tracking target in the current image frame can be determined by combining the location information of the tracking target in the previous image frame with the location information of the vehicles detected and identified in the current image frame. Based on the above principle and the content of the embodiment, as an optional embodiment, the embodiment of the present invention does not specifically limit the method of determining, according to the target detection boxes, the location information of the specified tracking targets in the current image frame. Referring to Fig. 4, the method includes but is not limited to:
1021. According to the location information of a specified tracking target in the previous image frame of the current image frame, determine the target prediction box of the specified tracking target in the current image frame.
Denoting the location information of the specified tracking target in the previous image frame of the current image frame by Xk-1, the target prediction box of the specified tracking target in the current image frame can be determined by the following formula (2):
X'k = Ak Xk-1   (2)
where X'k denotes the target prediction box of the specified tracking target in the current image frame, Ak denotes the state-transition matrix corresponding to the specified tracking target, and Xk-1 denotes the location information of the specified tracking target in the previous image frame. It should be noted that there may be more than one specified tracking target, and the target prediction box of each specified tracking target in the current image frame can be determined according to the above process.
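A minimal sketch of formula (2), treating the location information Xk-1 as a plain vector and the state-transition matrix Ak as nested lists; a constant-velocity model per coordinate is one common choice, though the patent does not fix Ak:

```python
def kalman_predict(state_prev, transition):
    """Formula (2): X'_k = A_k * X_{k-1}, as a plain matrix-vector product.

    state_prev: the state vector X_{k-1}; transition: the state-transition
    matrix A_k as a list of rows. For a box coordinate tracked with constant
    velocity, state_prev = [position, velocity]."""
    return [sum(a * x for a, x in zip(row, state_prev)) for row in transition]
```

With Ak = [[1, 1], [0, 1]] and state [position, velocity], the predicted position advances by one velocity step per frame.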
1022. According to the target prediction boxes and the target detection boxes, determine the location information of the specified tracking targets in the current image frame.
Having obtained the target prediction boxes and the target detection boxes, the location information of the specified tracking targets in the current image frame may be determined, for example, by taking intersections; the present invention does not specifically limit this.
In the method provided by the embodiment of the present invention, the target prediction box of a specified tracking target in the current image frame is determined according to the location information of the specified tracking target in the previous image frame of the current image frame. According to the target prediction boxes and the target detection boxes, the location information of the specified tracking targets in the current image frame is determined. Since the location information of the specified tracking targets in the previous image frame can be combined with the location information of the vehicles detected and identified in the current image frame to determine the location information of the specified tracking targets in the current image frame, the accuracy of the subsequent monitoring result can be improved.
Based on the content of the above embodiment, as an optional embodiment, the embodiment of the present invention does not specifically limit the method of determining, according to the target prediction boxes and the target detection boxes, the location information of the specified tracking targets in the current image frame. Referring to Fig. 5, the method includes but is not limited to:
10221. For any target prediction box, calculate the intersection-over-union ratio between the target prediction box and each target detection box.
In this step, there is usually more than one target prediction box, and the intersection-over-union ratio between each target prediction box and each target detection box can be calculated. For any target prediction box, after the intersection-over-union ratios between the target prediction box and each target detection box have been calculated, the maximum intersection-over-union ratio can be determined among all of them.
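Step 10221 relies on the standard intersection-over-union computation, which can be sketched as follows for boxes given as corner coordinates; the matching helper is an assumption about how the maximum ratio is picked:

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x_min, y_min, x_max, y_max)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def best_match(pred_box, detection_boxes):
    """For one target prediction box, return the maximum IoU over all target
    detection boxes and the index of the matching box (step 10221)."""
    scores = [iou(pred_box, d) for d in detection_boxes]
    best = max(range(len(scores)), key=scores.__getitem__)
    return scores[best], best
```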
It should be noted that, as can be seen from the above steps, for any tracking target it is necessary to count the total number of frames in which the tracking target is detected from the start image frame to the current image frame. That is, it is necessary to determine whether the tracking target is detected in each image frame. The embodiment of the present invention does not specifically limit the manner of determining whether a tracking target is detected in the current image frame, which includes but is not limited to: determining the target prediction box of the tracking target in the current image frame according to the location information of the tracking target in the previous image frame; inputting the current image frame into the vehicle detection model and outputting the target detection boxes in the current image frame; and calculating the IoU ratio between the target prediction box and each target detection box. If the maximum of all the IoU ratios is not less than the third preset threshold, it can be determined that the tracking target is detected in the current image frame. In actual implementation, whether a tracking target is detected in an image frame can of course also be determined in other ways, for example by image recognition, and the present invention does not specifically limit this.
It should also be noted that, for the first image frame, multiple target detection boxes can be output by inputting the first image frame into the vehicle detection model, from which multiple tracking targets can be determined. These tracking targets are all considered to be detected in the first image frame.
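As an illustrative aside, the IoU matching described above can be sketched in Python; the function names and the `(x1, y1, x2, y2)` box format are assumptions for illustration, not part of the patent's disclosure.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def best_match(pred_box, det_boxes):
    """Return (maximum IoU ratio, index of the detection box attaining it)."""
    ious = [iou(pred_box, d) for d in det_boxes]
    best = max(range(len(ious)), key=lambda i: ious[i])
    return ious[best], best
```

For each target prediction box, `best_match` yields the maximum IoU ratio and the detection box attaining it; the ratio is then compared against the third preset threshold as described in the text.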
10222. If the maximum of all the IoU ratios is less than the third preset threshold, take the target prediction box as the location information, in the current image frame, of the specified tracking target corresponding to that target prediction box; if the maximum IoU ratio is not less than the third preset threshold, determine the location information, in the current image frame, of the specified tracking target corresponding to the target prediction box according to the target prediction box and the target detection box corresponding to the maximum IoU ratio.
Taking the location information X_k in the current image frame as an example, if the maximum IoU ratio is less than the third preset threshold then, in combination with the above formula (2), X'_k can be assigned directly to X_k. That is, for any target prediction box, the target prediction box itself can be taken as the location information, in the current image frame, of the specified tracking target corresponding to that target prediction box.
If the maximum IoU ratio is not less than the third preset threshold, the location information, in the current image frame, of the specified tracking target corresponding to the target prediction box can be determined with reference to the following formula (3):
X_k = X'_k + Q_k(Z_k − R_k X'_k)    (3)
where X'_k denotes the target prediction box, Q_k denotes the Kalman filter gain coefficient of the current image frame, Z_k denotes the target detection box corresponding to the maximum IoU ratio, and R_k is the observation matrix.
Q_k can be calculated by the following formula (4):
Q_k = P'_k R_k^T (R_k P'_k R_k^T + S_k)^(−1)    (4)
where P'_k denotes the covariance prediction matrix of the specified tracking target corresponding to the target prediction box in the predicted state X'_k, R_k denotes the observation matrix, R_k^T denotes the transpose of R_k, and S_k is the observation-noise covariance matrix, which obeys a standard normal distribution.
P'_k can be calculated by the following formula (5):
P'_k = A_k P_{k−1} A_k^T + B_{k−1}    (5)
where P'_k denotes the covariance prediction matrix of the specified tracking target corresponding to the target prediction box in the predicted state X'_k, A_k denotes the state-transition matrix of that specified tracking target, A_k^T denotes the transpose of A_k, and B_{k−1} is the dynamic-noise covariance matrix, which obeys a standard normal distribution.
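The correction step described by formulas (3)–(5) follows the standard Kalman filter; below is a numerical sketch under that assumption. The final covariance-update line is the usual companion step of a Kalman correction and is an assumption here, as it is not stated explicitly in the text.

```python
import numpy as np

def kalman_correct(x_pred, P_prev, z, A, R, S, B):
    """One correction cycle in the spirit of formulas (3)-(5).
    x_pred: predicted state X'_k      z: matched detection box Z_k
    A: state-transition matrix A_k    R: observation matrix R_k
    S: observation-noise covariance S_k
    B: dynamic-noise covariance B_{k-1}
    """
    P_pred = A @ P_prev @ A.T + B                            # formula (5)
    Q = P_pred @ R.T @ np.linalg.inv(R @ P_pred @ R.T + S)   # formula (4): Kalman gain
    x_new = x_pred + Q @ (z - R @ x_pred)                    # formula (3)
    P_new = (np.eye(len(x_pred)) - Q @ R) @ P_pred           # covariance for the next frame (assumed step)
    return x_new, P_new
```

With near-zero observation noise S, the corrected state snaps to the matched detection, which is a quick sanity check of the gain computed by formula (4).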
In the method provided by the embodiment of the present invention, for any target prediction box, the IoU ratios between the target prediction box and each target detection box are calculated. If the maximum of all the IoU ratios is less than the third preset threshold, the target prediction box is taken as the location information of the corresponding specified tracking target in the current image frame; if the maximum IoU ratio is not less than the third preset threshold, that location information is determined according to the target prediction box and the target detection box corresponding to the maximum IoU ratio. Since the location information of the specified tracking target corresponding to each target prediction box can be calculated in different ways depending on the IoU ratio, the accuracy of the subsequent monitoring result can be improved.
Based on the content of the above embodiments, as an alternative embodiment, the embodiment of the present invention does not specifically limit the method of determining, according to the location information of the specified tracking targets in the current image frame, the average speed of all the specified tracking targets at the moment corresponding to the current image frame. Referring to Fig. 6, the method includes but is not limited to:
1031. Determine the specified start image frame corresponding to each specified tracking target according to the detection weight value corresponding to each specified tracking target and the fourth preset threshold.
In combination with the example of the above embodiment, take any specified tracking target whose start image frame, i.e. the frame in which it was first detected, is the 1st frame, with the current image frame being the 10th frame. Since tracking from the 1st frame to the 10th frame amounts to 10 frames of continuous tracking, the tracking length can be determined to be 10 according to the frame-number difference between the current image frame and the start image frame. If the vehicle corresponding to this specified tracking target is vehicle i, and vehicle i is detected not only in the 1st and 10th frames but also in the 2nd, 3rd, 6th and 7th frames, then the total number of frames in which vehicle i is detected from the start image frame to the current image frame is 6. Accordingly, the detection weight value of vehicle i as a tracking target is 6.
After the detection weight value is obtained, the difference between the detection weight value and the fourth preset threshold can be taken as a new detection weight value, and a new tracking length can be determined according to the new detection weight value. Finally, the specified start image frame corresponding to the specified tracking target can be determined according to the new tracking length. Taking the fourth preset threshold as 2 and combining the above example, the new detection weight value is 6 − 2 = 4. Vehicle i, as a specified tracking target, reaches a detection weight value of 4 exactly at the 6th frame, so that the 6th frame determines the new tracking length, and the 6th frame can be taken as the specified start image frame corresponding to this specified tracking target.
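The start-frame update in the example above (detections in frames 1, 2, 3, 6, 7 and 10, fourth preset threshold 2, new start frame 6) can be sketched as follows; the helper name and the ascending-frame-list representation are hypothetical:

```python
def specified_start_frame(detected_frames, fourth_threshold):
    """detected_frames: ascending frame indices in which the target was
    detected.  The new detection weight is the old weight (the number of
    detections) minus the threshold; the frame at which that many
    detections have accumulated becomes the specified start frame."""
    new_weight = len(detected_frames) - fourth_threshold
    return detected_frames[new_weight - 1]
```

With the example's data, the fourth detection falls in the 6th frame, reproducing the start frame stated in the text.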
It should be noted that the specified start image frame corresponding to each specified tracking target can be determined by the above process, and the specified start image frames corresponding to different specified tracking targets may differ. It should also be noted that, for the image frames preceding the current image frame, the location information of the specified tracking targets in those image frames can be calculated in the same way as their location information in the current image frame, i.e. according to the method provided by the above embodiments, and details are not repeated here.
1032. Determine the average speed of all the specified tracking targets at the moment corresponding to the current image frame, according to the location information of each specified tracking target in the current image frame and the location information of each specified tracking target in its corresponding specified start image frame.
The embodiment of the present invention does not specifically limit the manner of determining the average speed of all the specified tracking targets at the moment corresponding to the current image frame according to the location information of each specified tracking target in the current image frame and in its corresponding specified start image frame, which includes but is not limited to the following process:
(1) Through a perspective transformation matrix, coordinate-transform the location information of each specified tracking target in the current image frame and the location information of each specified tracking target in its corresponding specified start image frame, to obtain the actual position information of each specified tracking target in the current image frame and in its corresponding specified start image frame.
Since the camera shoots the video in two dimensions, and may do so at an arbitrary angle, the distance between any two points in a video image frame is not their actual distance. Therefore, the perspective transformation matrix can be obtained before step (1) is performed. Specifically, after the camera is fixed, the perspective transformation matrix can be calculated on the first image frame of the video and saved, so that the saved perspective transformation matrix can be used for the subsequent image frames of the video. It should be noted that, in actual implementation, the calculation is not limited to the first image frame; the perspective transformation matrix can also be calculated on, for example, the second or a later image frame.
The perspective transformation matrix M is a 3 × 3 matrix, which can be written as the following formula (6):
M = [[m11, m12, m13], [m21, m22, m23], [m31, m32, m33]]    (6)
For any specified tracking target, taking its location information in the current image frame as (c_x, c_y) as an example, the actual position information (c_t_x, c_t_y) of the specified tracking target can be calculated by the following formula (7):
c_t_x = (m11·c_x + m12·c_y + m13) / (m31·c_x + m32·c_y + m33)
c_t_y = (m21·c_x + m22·c_y + m23) / (m31·c_x + m32·c_y + m33)    (7)
It should be noted that, since a specified tracking target is represented in the current image frame by a preset box, the location information of the specified tracking target in the current image frame can be the centre coordinates of that box, which the embodiment of the present invention does not specifically limit. Similarly, the actual position information of the specified tracking target in its corresponding specified start image frame can be calculated according to the above formula.
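The mapping of formula (7) is the usual homogeneous multiply-then-normalize application of a 3 × 3 perspective (homography) matrix; a sketch under that assumption, with the entries m11…m33 written out:

```python
def to_real_coords(M, cx, cy):
    """Map an image-plane centre point (cx, cy) to road-plane coordinates
    with a 3x3 perspective matrix M, as in formula (7): multiply in
    homogeneous coordinates, then divide by the third component."""
    x = M[0][0] * cx + M[0][1] * cy + M[0][2]
    y = M[1][0] * cx + M[1][1] * cy + M[1][2]
    w = M[2][0] * cx + M[2][1] * cy + M[2][2]
    return x / w, y / w
```

Note that scaling M by any nonzero constant leaves the result unchanged, which is the expected homography behaviour.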
(2) Calculate the average displacement of all the specified tracking targets at the moment corresponding to the current image frame, according to the actual position information of each specified tracking target in the current image frame and in its corresponding specified start image frame.
Taking the j-th specified tracking target as an example, based on the actual position information calculated above, the distance between the two positions can be calculated and denoted d_j, and the frame-number difference between the current image frame and the specified start image frame corresponding to the j-th specified tracking target can be denoted len1_j − len2_j. Accordingly, the average displacement of all the specified tracking targets at the moment corresponding to the current image frame can be calculated by the following formula (8):
d_mean = (1 / ot_real) · Σ_{j=1}^{ot_real} d_j / (len1_j − len2_j)    (8)
where d_mean denotes the average displacement of all the specified tracking targets at the moment corresponding to the current image frame, and ot_real denotes the total number of specified tracking targets.
(3) Calculate the average speed of all the specified tracking targets at the moment corresponding to the current image frame, according to the average displacement corresponding to all the specified tracking targets and the frame rate.
As can be seen from the above embodiments, the total number of specified tracking targets may be 0. When the total number of specified tracking targets is 0, a preset constant value, which may be a relatively large number, can be taken as the average speed. When the total number of specified tracking targets is greater than 0, the average speed of all the specified tracking targets at the moment corresponding to the current image frame is calculated by the following formula (9):
speed_mean = d_mean · fps    (9)
where speed_mean denotes the average speed of all the specified tracking targets at the moment corresponding to the current image frame, and fps denotes the frame rate.
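Formulas (8) and (9), including the fallback when no specified tracking targets exist, can be combined into a short sketch; the default value 99.0 stands in for the "relatively large" preset constant mentioned above and is an arbitrary assumption:

```python
def mean_speed(targets, fps, default_speed=99.0):
    """targets: list of (distance d_j, frame span len1_j - len2_j) pairs,
    one per specified tracking target.  Per-frame displacements are
    averaged over targets (formula (8)) and converted to a speed with
    the frame rate (formula (9)).  Returns default_speed when no
    specified targets exist."""
    if not targets:
        return default_speed  # fallback constant from the text
    d_mean = sum(d / span for d, span in targets) / len(targets)
    return d_mean * fps
```

For example, two targets that moved 9 and 18 units over 9 frames each give a per-frame average displacement of 1.5, which at 25 fps yields an average speed of 37.5 units per second.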
In the method provided by the embodiment of the present invention, the specified start image frame corresponding to each specified tracking target is determined according to the detection weight value corresponding to each specified tracking target and the fourth preset threshold. The average speed of all the specified tracking targets at the moment corresponding to the current image frame is then determined according to the location information of each specified tracking target in the current image frame and in its corresponding specified start image frame. Since whether traffic congestion occurs can subsequently be determined according to this average speed, the accuracy of the monitoring result can be improved.
Based on the content of the above embodiments, as an alternative embodiment, the embodiment of the present invention does not specifically limit the method of judging, according to the average speed, whether traffic congestion occurs in the traffic video corresponding to the current image frame. Referring to Fig. 7, the method includes but is not limited to:
1041. Determine the first total number of frames between the judgment start image frame and the current image frame, and count the second total number of frames, between the judgment start image frame and the current image frame, whose average speed is less than the fifth preset threshold.
By the method provided in the above embodiments, the average speed of all the specified tracking targets at the moment corresponding to each image frame can be calculated; that is, each image frame corresponds to an average speed. Therefore, in this step, the second total number of frames between the judgment start image frame and the current image frame whose average speed is less than the fifth preset threshold can be counted.
1042. If the ratio of the second total number of frames to the first total number of frames is greater than the sixth preset threshold, determine that traffic congestion occurs in the traffic video, the traffic video being formed from the image frames between the judgment start image frame and the current image frame.
The traffic video here includes the judgment start image frame and the current image frame. The sizes of the preset thresholds involved in the above embodiments can be configured according to actual needs, which the present invention does not specifically limit. For example, taking the judgment start image frame as the 1st frame and the current image frame as the 10th frame, the first total number of frames is 10. If, among these 10 frames, 4 frames have a corresponding average speed less than the fifth preset threshold, then the second total number of frames is 4 and the ratio between the two is 0.4. If 0.4 is greater than the sixth preset threshold, it can be determined that traffic congestion occurs in the traffic video.
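The ratio test of steps 1041–1042, applied to the example above (10 frames, 4 of them below the fifth preset threshold, giving a ratio of 0.4 compared against the sixth preset threshold), can be sketched as:

```python
def is_congested(avg_speeds, speed_threshold, ratio_threshold):
    """avg_speeds: per-frame average speeds from the judgment start frame
    through the current frame.  Congestion is declared when the fraction
    of slow frames exceeds ratio_threshold."""
    first_total = len(avg_speeds)                                   # first total number of frames
    second_total = sum(1 for v in avg_speeds if v < speed_threshold)  # slow frames
    return second_total / first_total > ratio_threshold
```

With a sixth preset threshold of 0.3 the example sequence is judged congested; with 0.5 it is not.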
In the method provided by the embodiment of the present invention, the first total number of frames between the judgment start image frame and the current image frame is determined, and the second total number of frames between the judgment start image frame and the current image frame whose average speed is less than the fifth preset threshold is counted. If the ratio of the second total number of frames to the first total number of frames is greater than the sixth preset threshold, it is determined that traffic congestion occurs in the traffic video, the traffic video being formed from the image frames between the judgment start image frame and the current image frame. Since whether traffic congestion occurs can be determined according to the average speed of all the specified tracking targets at the moment corresponding to the current image frame, the accuracy of the monitoring result can be improved.
It should be noted that all the above alternative embodiments can be combined in any manner to form alternative embodiments of the present invention, which are not repeated here.
Based on the content of the above embodiments, an embodiment of the present invention provides a traffic monitoring device, which is used to perform the traffic monitoring method in the above method embodiments. Referring to Fig. 8, the device includes an output module 801, a first determining module 802 and a judgment module 803, wherein:
the output module 801 is configured to input the current image frame into a deep neural network and output the target detection boxes in the current image frame, the target detection boxes being used to indicate the vehicles in the current image frame;
the first determining module 802 is configured to determine, according to the target detection boxes, the location information of the specified tracking targets in the current image frame, and to determine, according to that location information, the average speed of all the specified tracking targets at the moment corresponding to the current image frame, a specified tracking target being a vehicle that is tracked in a specified manner at the moment corresponding to the current image frame;
the judgment module 803 is configured to judge, according to the average speed, whether traffic congestion occurs in the traffic video corresponding to the current image frame.
Based on the content of the above embodiments, as an alternative embodiment, the device further includes:
a second determining module, configured to determine, for any tracking target, the start image frame in which the tracking target is first detected, and to determine the tracking length of the tracking target according to the frame-number difference between the current image frame and the start image frame, a tracking target being a vehicle that is tracked at the moment corresponding to the current image frame;
a statistics module, configured to count the total number of frames in which the tracking target is detected from the start image frame to the current image frame, and to take it as the detection weight value of the tracking target;
a third determining module, configured to take the tracking target as a specified tracking target when the tracking length of the tracking target is greater than the first preset threshold, the detection weight value of the tracking target is greater than the second preset threshold, and the tracking target is detected in the current image frame.
Based on the content of the above embodiments, as an alternative embodiment, the first determining module 802 includes a first determination unit and a second determination unit, wherein:
the first determination unit is configured to determine the target prediction box of a specified tracking target in the current image frame according to the location information of the specified tracking target in the previous image frame;
the second determination unit is configured to determine the location information of the specified tracking target in the current image frame according to the target prediction box and the target detection boxes.
Based on the content of the above embodiments, as an alternative embodiment, the second determination unit is configured to calculate, for any target prediction box, the IoU ratio between the target prediction box and each target detection box; if the maximum of all the IoU ratios is less than the third preset threshold, to take the target prediction box as the location information, in the current image frame, of the specified tracking target corresponding to the target prediction box; and if the maximum IoU ratio is not less than the third preset threshold, to determine that location information according to the target prediction box and the target detection box corresponding to the maximum IoU ratio.
Based on the content of the above embodiments, as an alternative embodiment, the first determining module 802 further includes a third determination unit and a fourth determination unit, wherein:
the third determination unit is configured to determine the specified start image frame corresponding to each specified tracking target according to the detection weight value corresponding to each specified tracking target and the fourth preset threshold;
the fourth determination unit is configured to determine the average speed of all the specified tracking targets at the moment corresponding to the current image frame according to the location information of each specified tracking target in the current image frame and in its corresponding specified start image frame.
Based on the content of the above embodiments, as an alternative embodiment, the fourth determination unit is configured to: coordinate-transform, through the perspective transformation matrix, the location information of each specified tracking target in the current image frame and in its corresponding specified start image frame, to obtain the actual position information of each specified tracking target in the current image frame and in its corresponding specified start image frame; calculate the average displacement of all the specified tracking targets at the moment corresponding to the current image frame according to that actual position information; and calculate the average speed of all the specified tracking targets at the moment corresponding to the current image frame according to the average displacement corresponding to all the specified tracking targets and the frame rate.
Based on the content of the above embodiments, as an alternative embodiment, the judgment module 803 is configured to: determine the first total number of frames between the judgment start image frame and the current image frame; count the second total number of frames, between the judgment start image frame and the current image frame, whose average speed is less than the fifth preset threshold; and, if the ratio of the second total number of frames to the first total number of frames is greater than the sixth preset threshold, determine that traffic congestion occurs in the traffic video, the traffic video being formed from the image frames between the judgment start image frame and the current image frame.
In the device provided by the embodiment of the present invention, the current image frame is input into the vehicle detection model, and the target detection boxes in the current image frame are output. According to the target detection boxes, the location information of the specified tracking targets in the current image frame is determined, and according to that location information, the average speed of all the specified tracking targets at the moment corresponding to the current image frame is determined. According to the average speed, it is judged whether traffic congestion occurs in the traffic video corresponding to the current image frame. Since the target detection boxes in the current image frame can be detected and identified by the vehicle detection model, and the vehicle detection model can be trained on sample images of various categories, interference from external environmental factors can be overcome and the accuracy of the monitoring result can be improved.
Fig. 9 illustrates a schematic diagram of the physical structure of an electronic device. As shown in Fig. 9, the electronic device may include a processor 910, a communication interface (Communications Interface) 920, a memory 930 and a communication bus 940, wherein the processor 910, the communication interface 920 and the memory 930 communicate with one another through the communication bus 940. The processor 910 can call the logic instructions in the memory 930 to perform the following method: inputting the current image frame into the vehicle detection model and outputting the target detection boxes in the current image frame, the target detection boxes being used to indicate the vehicles in the current image frame; determining, according to the target detection boxes, the location information of the specified tracking targets in the current image frame, and determining, according to that location information, the average speed of all the specified tracking targets at the moment corresponding to the current image frame, a specified tracking target being a vehicle that is tracked in a specified manner at the moment corresponding to the current image frame; and judging, according to the average speed, whether traffic congestion occurs in the traffic video corresponding to the current image frame.
In addition, the above logic instructions in the memory 930 can be implemented in the form of software functional units and, when sold or used as an independent product, can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, an electronic device, a network device, or the like) to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
An embodiment of the present invention also provides a non-transitory computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements the methods provided by the above embodiments, for example: inputting the current image frame into the vehicle detection model and outputting the target detection boxes in the current image frame, the target detection boxes being used to indicate the vehicles in the current image frame; determining, according to the target detection boxes, the location information of the specified tracking targets in the current image frame, and determining, according to that location information, the average speed of all the specified tracking targets at the moment corresponding to the current image frame, a specified tracking target being a vehicle that is tracked in a specified manner at the moment corresponding to the current image frame; and judging, according to the average speed, whether traffic congestion occurs in the traffic video corresponding to the current image frame.
The apparatus embodiments described above are merely exemplary. The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. Those of ordinary skill in the art can understand and implement this without creative labour.
Through the above description of the embodiments, those skilled in the art can clearly understand that each embodiment can be realized by means of software plus a necessary general hardware platform, or of course by hardware. Based on this understanding, the above technical solution, in essence, or the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product may be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk or an optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the method described in each embodiment or in certain parts of the embodiments.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present invention rather than to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of the technical features therein, and that such modifications or replacements do not depart the essence of the corresponding technical solutions from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (10)
1. A traffic monitoring method, characterized by comprising:
inputting a current image frame into a vehicle detection model and outputting target detection boxes in the current image frame, the target detection boxes being used to indicate vehicles in the current image frame;
determining, according to the target detection boxes, location information of specified tracking targets in the current image frame, and determining, according to the location information of the specified tracking targets in the current image frame, an average speed of all the specified tracking targets at a moment corresponding to the current image frame, a specified tracking target being a vehicle that is tracked in a specified manner at the moment corresponding to the current image frame;
judging, according to the average speed, whether traffic congestion occurs in a traffic video corresponding to the current image frame.
2. The method according to claim 1, characterized in that, before the determining, according to the target detection boxes, of the location information of the specified tracking targets in the current image frame, the method further comprises:
for any tracking target, determining a start image frame in which the tracking target is first detected, and determining a tracking length of the tracking target according to a frame-number difference between the current image frame and the start image frame, a tracking target being a vehicle that is tracked at the moment corresponding to the current image frame;
counting a total number of frames in which the tracking target is detected from the start image frame to the current image frame, and taking it as a detection weight value of the tracking target;
if the tracking length of the tracking target is greater than a first preset threshold, the detection weight value of the tracking target is greater than a second preset threshold, and the tracking target is detected in the current image frame, taking the tracking target as a specified tracking target.
3. The method according to claim 1 or 2, characterized in that the determining, according to the target detection boxes, of the location information of the specified tracking targets in the current image frame comprises:
determining a target prediction box of a specified tracking target in the current image frame according to location information of the specified tracking target in a previous image frame of the current image frame;
determining the location information of the specified tracking target in the current image frame according to the target prediction box and the target detection boxes.
4. The method according to claim 3, wherein determining, according to the target prediction frame and the target detection frames, the location information of the specified tracking target in the current image frame comprises:
for any target prediction frame, calculating the intersection-over-union (IoU) between the target prediction frame and each target detection frame;
if the maximum of all the IoU values is less than a third preset threshold, taking the target prediction frame as the location information, in the current image frame, of the specified tracking target corresponding to the target prediction frame; if the maximum IoU is not less than the third preset threshold, determining the location information, in the current image frame, of the specified tracking target corresponding to the target prediction frame according to the target prediction frame and the target detection frame corresponding to the maximum IoU.
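A sketch of the IoU matching in claim 4: each prediction box either stays as-is (maximum IoU below the threshold) or is combined with its best-matching detection box. The (x1, y1, x2, y2) box format, the threshold value, and the averaging used to combine the two boxes are assumptions; the patent fixes none of them:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def match_prediction(pred_box, detection_boxes, iou_threshold=0.3):
    """Return the location for the tracking target behind pred_box."""
    best = max(detection_boxes, key=lambda d: iou(pred_box, d))
    if iou(pred_box, best) < iou_threshold:
        return pred_box  # no detection matches: keep the prediction
    # Combine prediction and best detection (simple average, an assumption).
    return tuple((p + d) / 2 for p, d in zip(pred_box, best))
```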
5. The method according to claim 2, wherein determining, according to the location information of the specified tracking targets in the current image frame, the average speed of all specified tracking targets at the moment corresponding to the current image frame comprises:
determining the specified start image frame corresponding to each specified tracking target according to the detection weight of each specified tracking target and a fourth preset threshold;
determining the average speed of all specified tracking targets at the moment corresponding to the current image frame according to the location information of each specified tracking target in the current image frame and the location information of each specified tracking target in its corresponding specified start image frame.
6. The method according to claim 5, wherein determining the average speed of all specified tracking targets at the moment corresponding to the current image frame, according to the location information of each specified tracking target in the current image frame and in its corresponding specified start image frame, comprises:
transforming, by a perspective transformation matrix, the location information of each specified tracking target in the current image frame and in its corresponding specified start image frame, to obtain the actual position information of each specified tracking target in the current image frame and in its corresponding specified start image frame;
calculating the average displacement of all specified tracking targets at the moment corresponding to the current image frame according to the actual position information of each specified tracking target in the current image frame and in its corresponding specified start image frame;
calculating the average speed of all specified tracking targets at the moment corresponding to the current image frame according to the average displacement and the frame rate.
7. The method according to claim 1, wherein judging, according to the average speed, whether traffic congestion occurs in the traffic video corresponding to the current image frame comprises:
determining a first total number of frames from a judgment start image frame to the current image frame, and counting a second total number of frames, between the judgment start image frame and the current image frame, in which the average speed is less than a fifth preset threshold;
if the ratio of the second total number of frames to the first total number of frames is greater than a sixth preset threshold, determining that traffic congestion occurs in the traffic video, the traffic video being formed by the image frames from the judgment start image frame to the current image frame.
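The congestion decision in claim 7 reduces to a ratio test over a window of per-frame average speeds; a minimal sketch, with the two threshold values chosen purely for illustration:

```python
def is_congested(avg_speeds, speed_threshold=10.0, ratio_threshold=0.7):
    """avg_speeds: per-frame average speeds from the judgment start image
    frame to the current image frame (inclusive). Congestion is declared
    when the fraction of slow frames exceeds ratio_threshold."""
    first_total = len(avg_speeds)                   # first total frame count
    second_total = sum(1 for v in avg_speeds
                       if v < speed_threshold)      # frames below threshold
    return second_total / first_total > ratio_threshold
```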
8. A traffic monitoring device, comprising:
an output module, configured to input the current image frame into a vehicle detection model and output the target detection frames in the current image frame, the target detection frames indicating the vehicles in the current image frame;
a first determining module, configured to determine, according to the target detection frames, the location information of the specified tracking targets in the current image frame, and to determine, according to the location information of the specified tracking targets in the current image frame, the average speed of all specified tracking targets at the moment corresponding to the current image frame, a specified tracking target being a vehicle under specified tracking at the moment corresponding to the current image frame;
a judgment module, configured to judge, according to the average speed, whether traffic congestion occurs in the traffic video corresponding to the current image frame.
9. An electronic device, comprising:
at least one processor; and
at least one memory communicatively connected to the processor, wherein:
the memory stores program instructions executable by the processor, and the processor, by calling the program instructions, is able to execute the method according to any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium stores computer instructions that cause a computer to execute the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811157235.6A CN109087510B (en) | 2018-09-29 | 2018-09-29 | Traffic monitoring method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811157235.6A CN109087510B (en) | 2018-09-29 | 2018-09-29 | Traffic monitoring method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109087510A true CN109087510A (en) | 2018-12-25 |
CN109087510B CN109087510B (en) | 2021-09-07 |
Family
ID=64843136
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811157235.6A Active CN109087510B (en) | 2018-09-29 | 2018-09-29 | Traffic monitoring method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109087510B (en) |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110047276A (en) * | 2019-03-11 | 2019-07-23 | 广州文远知行科技有限公司 | The congestion status of barrier vehicle determines method, apparatus and Related product |
CN110188596A (en) * | 2019-01-04 | 2019-08-30 | 北京大学 | Monitor video pedestrian real-time detection, Attribute Recognition and tracking and system based on deep learning |
CN110298307A (en) * | 2019-06-27 | 2019-10-01 | 浙江工业大学 | A kind of exception parking real-time detection method based on deep learning |
CN110634153A (en) * | 2019-09-19 | 2019-12-31 | 上海眼控科技股份有限公司 | Target tracking template updating method and device, computer equipment and storage medium |
CN110889427A (en) * | 2019-10-15 | 2020-03-17 | 同济大学 | Congestion traffic flow traceability analysis method |
CN110942642A (en) * | 2019-11-20 | 2020-03-31 | 中科视元科技(杭州)有限公司 | Video-based traffic slow-driving detection method and system |
CN110992693A (en) * | 2019-12-04 | 2020-04-10 | 浙江工业大学 | Deep learning-based traffic congestion degree multi-dimensional analysis method |
CN111292353A (en) * | 2020-01-21 | 2020-06-16 | 成都恒创新星科技有限公司 | Parking state change identification method |
CN111583669A (en) * | 2019-02-15 | 2020-08-25 | 杭州海康威视数字技术股份有限公司 | Overspeed detection method, overspeed detection device, control equipment and storage medium |
CN111723747A (en) * | 2020-06-22 | 2020-09-29 | 西安工业大学 | Lightweight high-efficiency target detection method applied to embedded platform |
CN112053572A (en) * | 2020-09-07 | 2020-12-08 | 重庆同枥信息技术有限公司 | Vehicle speed measuring method, device and system based on video and distance grid calibration |
CN112132071A (en) * | 2020-09-27 | 2020-12-25 | 上海眼控科技股份有限公司 | Processing method, device and equipment for identifying traffic jam and storage medium |
CN112163471A (en) * | 2020-09-14 | 2021-01-01 | 深圳市诺龙技术股份有限公司 | Congestion detection method and device |
CN112232257A (en) * | 2020-10-26 | 2021-01-15 | 青岛海信网络科技股份有限公司 | Traffic abnormity determining method, device, equipment and medium |
CN112444311A (en) * | 2020-11-22 | 2021-03-05 | 同济大学 | Method for monitoring space-time load of bridge vehicle |
CN113850995A (en) * | 2021-09-14 | 2021-12-28 | 华设设计集团股份有限公司 | Event detection method, device and system based on tunnel radar vision data fusion |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102073853A (en) * | 2011-01-14 | 2011-05-25 | 华南理工大学 | Method for tracking multi-target vehicles by adopting MCMC (Markov Chain Monte Carlo) algorithm |
CN103871079A (en) * | 2014-03-18 | 2014-06-18 | 南京金智视讯技术有限公司 | Vehicle tracking method based on machine learning and optical flow |
CN103985252A (en) * | 2014-05-23 | 2014-08-13 | 江苏友上科技实业有限公司 | Multi-vehicle projection locating method based on time domain information of tracked object |
US20150054957A1 (en) * | 2013-08-23 | 2015-02-26 | Xerox Corporation | System and method for automated sequencing of vehicle under low speed conditions from video |
CN104732187A (en) * | 2013-12-18 | 2015-06-24 | 杭州华为企业通信技术有限公司 | Method and equipment for image tracking processing |
CN105513354A (en) * | 2015-12-22 | 2016-04-20 | 电子科技大学 | Video-based urban road traffic jam detecting system |
CN106056926A (en) * | 2016-07-18 | 2016-10-26 | 华南理工大学 | Video vehicle speed detection method based on dynamic virtual coil |
CN106204650A (en) * | 2016-07-11 | 2016-12-07 | 北京航空航天大学 | A kind of vehicle target tracking based on vacant lot video corresponding technology |
CN107066931A (en) * | 2017-01-12 | 2017-08-18 | 张家港全智电子科技有限公司 | A kind of target trajectory tracking based on monitor video |
CN107274433A (en) * | 2017-06-21 | 2017-10-20 | 吉林大学 | Method for tracking target, device and storage medium based on deep learning |
CN107644528A (en) * | 2017-08-02 | 2018-01-30 | 浙江工业大学 | A kind of vehicle queue length detection method based on vehicle tracking |
CN107798870A (en) * | 2017-10-25 | 2018-03-13 | 清华大学 | A kind of the flight path management method and system, vehicle of more vehicle target tracking |
CN108470332A (en) * | 2018-01-24 | 2018-08-31 | 博云视觉(北京)科技有限公司 | A kind of multi-object tracking method and device |
2018-09-29: Application CN201811157235.6A filed; granted as CN109087510B (status: Active)
Non-Patent Citations (2)
Title |
---|
Fu Weina: "Topological Isomorphism and Video Object Tracking", 31 May 2018 *
Yang Hu: "Urban Traffic Congestion Mitigation and Countermeasures", 31 August 2016 *
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110188596A (en) * | 2019-01-04 | 2019-08-30 | 北京大学 | Monitor video pedestrian real-time detection, Attribute Recognition and tracking and system based on deep learning |
CN111583669A (en) * | 2019-02-15 | 2020-08-25 | 杭州海康威视数字技术股份有限公司 | Overspeed detection method, overspeed detection device, control equipment and storage medium |
CN110047276B (en) * | 2019-03-11 | 2020-11-27 | 广州文远知行科技有限公司 | Method and device for determining congestion state of obstacle vehicle and related product |
CN110047276A (en) * | 2019-03-11 | 2019-07-23 | 广州文远知行科技有限公司 | The congestion status of barrier vehicle determines method, apparatus and Related product |
CN110298307A (en) * | 2019-06-27 | 2019-10-01 | 浙江工业大学 | A kind of exception parking real-time detection method based on deep learning |
CN110298307B (en) * | 2019-06-27 | 2021-07-20 | 浙江工业大学 | Abnormal parking real-time detection method based on deep learning |
CN110634153A (en) * | 2019-09-19 | 2019-12-31 | 上海眼控科技股份有限公司 | Target tracking template updating method and device, computer equipment and storage medium |
CN110889427A (en) * | 2019-10-15 | 2020-03-17 | 同济大学 | Congestion traffic flow traceability analysis method |
CN110889427B (en) * | 2019-10-15 | 2023-07-07 | 同济大学 | Congestion traffic flow traceability analysis method |
CN110942642A (en) * | 2019-11-20 | 2020-03-31 | 中科视元科技(杭州)有限公司 | Video-based traffic slow-driving detection method and system |
CN110942642B (en) * | 2019-11-20 | 2021-01-19 | 中科视元科技(杭州)有限公司 | Video-based traffic slow-driving detection method and system |
CN110992693B (en) * | 2019-12-04 | 2021-08-24 | 浙江工业大学 | Deep learning-based traffic congestion degree multi-dimensional analysis method |
CN110992693A (en) * | 2019-12-04 | 2020-04-10 | 浙江工业大学 | Deep learning-based traffic congestion degree multi-dimensional analysis method |
CN111292353A (en) * | 2020-01-21 | 2020-06-16 | 成都恒创新星科技有限公司 | Parking state change identification method |
CN111292353B (en) * | 2020-01-21 | 2023-12-19 | 成都恒创新星科技有限公司 | Parking state change identification method |
CN111723747A (en) * | 2020-06-22 | 2020-09-29 | 西安工业大学 | Lightweight high-efficiency target detection method applied to embedded platform |
CN112053572A (en) * | 2020-09-07 | 2020-12-08 | 重庆同枥信息技术有限公司 | Vehicle speed measuring method, device and system based on video and distance grid calibration |
CN112163471A (en) * | 2020-09-14 | 2021-01-01 | 深圳市诺龙技术股份有限公司 | Congestion detection method and device |
CN112132071A (en) * | 2020-09-27 | 2020-12-25 | 上海眼控科技股份有限公司 | Processing method, device and equipment for identifying traffic jam and storage medium |
CN112232257B (en) * | 2020-10-26 | 2023-08-11 | 青岛海信网络科技股份有限公司 | Traffic abnormality determination method, device, equipment and medium |
CN112232257A (en) * | 2020-10-26 | 2021-01-15 | 青岛海信网络科技股份有限公司 | Traffic abnormity determining method, device, equipment and medium |
CN112444311A (en) * | 2020-11-22 | 2021-03-05 | 同济大学 | Method for monitoring space-time load of bridge vehicle |
CN113850995A (en) * | 2021-09-14 | 2021-12-28 | 华设设计集团股份有限公司 | Event detection method, device and system based on tunnel radar vision data fusion |
Also Published As
Publication number | Publication date |
---|---|
CN109087510B (en) | 2021-09-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109087510A (en) | traffic monitoring method and device | |
CN107527009B (en) | Remnant detection method based on YOLO target detection | |
Kim et al. | Vision-based nonintrusive context documentation for earthmoving productivity simulation | |
CN107123131B (en) | Moving target detection method based on deep learning | |
CN111784685A (en) | Power transmission line defect image identification method based on cloud edge cooperative detection | |
CN103093212B (en) | The method and apparatus of facial image is intercepted based on Face detection and tracking | |
CN105975929A (en) | Fast pedestrian detection method based on aggregated channel features | |
CN109871730A (en) | A kind of target identification method, device and monitoring device | |
CN107507417B (en) | A kind of smartway partitioning method and device based on microwave radar echo-signal | |
CN108831161A (en) | A kind of traffic flow monitoring method, intelligence system and data set based on unmanned plane | |
CN110765865B (en) | Underwater target detection method based on improved YOLO algorithm | |
CN104346802A (en) | Method and device for monitoring off-job behaviors of personnel | |
CN112613569B (en) | Image recognition method, training method and device for image classification model | |
CN113112480B (en) | Video scene change detection method, storage medium and electronic device | |
CN109685045A (en) | A kind of Moving Targets Based on Video Streams tracking and system | |
CN106778633B (en) | Pedestrian identification method based on region segmentation | |
CN110533955A (en) | A kind of method, terminal device and the computer readable storage medium on determining parking stall | |
CN113887605A (en) | Shape-adaptive rotating target detection method, system, medium, and computing device | |
CN108229473A (en) | Vehicle annual inspection label detection method and device | |
CN111476160A (en) | Loss function optimization method, model training method, target detection method, and medium | |
CN107481260A (en) | A kind of region crowd is detained detection method, device and storage medium | |
CN109684986A (en) | A kind of vehicle analysis method and system based on automobile detecting following | |
CN111524121A (en) | Road and bridge fault automatic detection method based on machine vision technology | |
CN106611165B (en) | A kind of automotive window detection method and device based on correlation filtering and color-match | |
CN114494845A (en) | Artificial intelligence hidden danger troubleshooting system and method for construction project site |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||