CN108062861A - A kind of intelligent traffic monitoring system - Google Patents
- Publication number
- CN108062861A (application number CN201711485166.7A)
- Authority
- CN
- China
- Prior art keywords
- dynamic vehicle
- position coordinates
- video image
- center position
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/017—Detecting movement of traffic to be counted or controlled identifying vehicles
- G08G1/0175—Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/24323—Tree-organised classifiers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/248—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/584—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/259—Fusion by voting
Abstract
The invention discloses an intelligent traffic monitoring system including a camera, a wireless transmission module and a back-end monitoring service platform. The camera collects video images of the monitored road section; the wireless transmission module sends the collected video images to the back-end monitoring service platform; and the back-end monitoring service platform detects and tracks dynamic vehicles in the video images. By processing and analyzing the real-time road-condition information gathered by the traffic monitoring system, the invention tracks dynamic vehicles, accomplishes the task of optimizing the traffic state, and provides effective support for alleviating traffic problems such as congestion and low transport efficiency.
Description
Technical field
The invention belongs to the field of video monitoring, and more particularly relates to an intelligent traffic monitoring system.
Background technology
In recent years, with the continuous growth of transport demand and vehicle numbers, traffic systems have become increasingly complex. Traffic congestion and low transport efficiency are common problems in large and medium-sized cities in China. One feasible way to solve urban traffic problems is to introduce effective and rational management techniques and to establish a practical and efficient intelligent traffic monitoring system. Such a system analyzes the collected traffic information, derives effective control measures, and, by regulating the hardware facilities in the traffic system, accomplishes the task of optimizing the traffic state. How to obtain traffic information quickly and accurately is therefore of great significance, and one major topic in acquiring traffic information is the problem of tracking vehicles in dynamic video.
Summary of the invention
In view of the above problems, the present invention provides an intelligent traffic monitoring system.
The object of the present invention is achieved by the following technical solution:
An intelligent traffic monitoring system includes a camera, a wireless transmission module and a back-end monitoring service platform;
the camera collects video images of the monitored road section;
the wireless transmission module sends the collected video images to the back-end monitoring service platform;
the back-end monitoring service platform detects and tracks dynamic vehicles in the video images.
Beneficial effects of the present invention: the proposed intelligent traffic monitoring system processes and analyzes the real-time road-condition information collected by the traffic monitoring system, realizes dynamic vehicle tracking, accomplishes the task of optimizing the traffic state, and provides effective support for alleviating traffic problems such as congestion and low transport efficiency.
Description of the drawings
The invention is further described with reference to the accompanying drawings, but the embodiments in the drawings do not limit the invention in any way; for those of ordinary skill in the art, other drawings can be obtained from the following drawings without creative effort.
Fig. 1 is a schematic diagram of the present invention;
Fig. 2 is a structural block diagram of the back-end monitoring service platform of the present invention;
Fig. 3 is a structural block diagram of the tracking and localization submodule of the present invention.
Reference numerals: camera 1; wireless transmission module 2; back-end monitoring service platform 3; target acquisition submodule 31; initialization submodule 32; tracking and localization submodule 33; appearance feature assessment unit 331; motion feature assessment unit 332; positioning unit 333; target scale update and selection unit 334.
Specific embodiment
The invention is further described with reference to the following examples.
Referring to Fig. 1, an intelligent traffic monitoring system includes a camera 1, a wireless transmission module 2 and a back-end monitoring service platform 3;
the camera 1 collects video images of the monitored road section;
the wireless transmission module 2 sends the collected video images to the back-end monitoring service platform 3;
the back-end monitoring service platform 3 detects and tracks dynamic vehicles in the video images.
Preferably, referring to Fig. 2, the back-end monitoring service platform 3 includes a target acquisition submodule 31, an initialization submodule 32 and a tracking and localization submodule 33. The target acquisition submodule 31 selects one frame from the video images as the starting frame and manually marks the dynamic vehicle in the starting frame; the initialization submodule 32 performs an initialization operation on the starting frame; the tracking and localization submodule 33 detects and tracks the dynamic vehicle according to the initialization result of the initialization submodule 32.
The initialization operation on the starting frame specifically includes:
(1) randomly sampling a certain number of local image blocks from the target region of the starting frame where the dynamic vehicle is located as positive training samples, and randomly sampling a certain number of local image blocks from the background region adjacent to the target region as negative training samples;
(2) extracting the image features of the starting frame from the positive and negative training samples, the image features including: the color features and gradient features of the positive training samples together with their spatial offset information relative to the center of the dynamic vehicle, and the color features and gradient features of the negative training samples; and building a Hough forest detector with I decision trees from the obtained image features;
(3) randomly sampling a certain number of local image blocks from the target region of the starting frame as optical-flow tracking blocks, and randomly initializing the initial positions of the optical-flow tracking blocks.
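The positive/negative sampling in step (1) can be sketched as follows; the function name `sample_patches`, the axis-aligned box layout and the `margin` parameter are illustrative assumptions, not part of the filing:

```python
import random

def sample_patches(frame_w, frame_h, target, n_pos, n_neg, margin=16, seed=0):
    """Sample patch centers: positives inside the target box, negatives from
    the background ring adjacent to the target (a hypothetical layout).
    `target` is (x0, y0, x1, y1) in pixel coordinates."""
    rng = random.Random(seed)
    x0, y0, x1, y1 = target
    # Positive samples: uniformly inside the target region.
    pos = [(rng.uniform(x0, x1), rng.uniform(y0, y1)) for _ in range(n_pos)]
    # Negative samples: drawn from the enlarged box, kept only if outside
    # the target region (i.e. from the adjacent background).
    neg = []
    while len(neg) < n_neg:
        x = rng.uniform(max(0, x0 - margin), min(frame_w, x1 + margin))
        y = rng.uniform(max(0, y0 - margin), min(frame_h, y1 + margin))
        if not (x0 <= x <= x1 and y0 <= y <= y1):
            neg.append((x, y))
    return pos, neg
```

In the same spirit, the optical-flow tracking blocks of step (3) could be drawn with another call restricted to the target region.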
Preferably, referring to Fig. 3, the tracking and localization submodule 33 includes an appearance feature assessment unit 331, a motion feature assessment unit 332, a positioning unit 333, and a target scale update and selection unit 334.
When the video image at time t arrives, the appearance feature assessment unit 331 performs Hough voting, through the Hough forest detector, on the center position coordinates of the dynamic vehicle in the video image at time t, and obtains an appearance feature confidence value from the accumulated voting results.
The motion feature assessment unit 332 processes the optical-flow tracking blocks according to the motion information of the dynamic vehicle in the space-time domain, and outputs the motion feature confidence value of the center position coordinates of the dynamic vehicle in the video image at time t.
The positioning unit 333 estimates the center position coordinates of the dynamic vehicle in the video image at time t from the obtained appearance feature confidence value and motion feature confidence value, yielding the estimated center coordinates of the dynamic vehicle.
The target scale update and selection unit 334 updates the scale of the dynamic vehicle region according to the estimated coordinates obtained by the positioning unit 333, re-selects positive training samples, negative training samples and optical-flow tracking blocks from the updated video image at time t, and updates the Hough forest detector with the re-selected positive and negative training samples.
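Taken together, units 331-334 amount to one update step per incoming frame. The sketch below is a hypothetical orchestration with the four units passed in as callables; none of the names come from the filing:

```python
def track_frame(frame, appearance_conf, motion_conf, fuse, update_model):
    """One tracking step mirroring units 331-334.
    The four callables are illustrative stand-ins for the units."""
    Fa = appearance_conf(frame)   # unit 331: Hough-forest appearance confidence
    Fm = motion_conf(frame)       # unit 332: optical-flow motion confidence
    center = fuse(Fa, Fm)         # unit 333: fused localization
    update_model(frame, center)   # unit 334: scale update and re-sampling
    return center
```

A driver loop would simply call `track_frame` on each frame delivered by the wireless transmission module.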
Preferably, performing Hough voting on the center position coordinates of the dynamic vehicle and obtaining the appearance feature confidence value from the accumulated voting results specifically includes:
(1) loading the video image at time t and randomly sampling a certain number of local image blocks from it; passing each local image block through the Hough forest detector, whose decision trees judge whether the block belongs to the dynamic vehicle; when a decision tree judges that a block belongs to the dynamic vehicle, casting a Hough vote for the center position coordinates of the dynamic vehicle and accumulating the voting results, where the cumulative voting value at video image coordinate (m, n) is calculated as

$$V_a^t(m,n)=\sum_{(s',r')\in A}\frac{1}{I}\sum_{i=1}^{I}p\big((m,n)\,\big|\,L_i(s',r')\big)$$

where $V_a^t(m,n)$ is the cumulative voting value supporting the hypothesis that the center position coordinates of the dynamic vehicle at time t lie at coordinate (m, n); A is the region of interest, i.e. the target region and its adjacent background region; I is the total number of decision trees in the Hough forest detector; $L_i(s',r')$ is the leaf node reached by the local image block centered at (s', r') through the i-th decision tree; and $p((m,n)\mid L_i(s',r'))$ is the probability, under the i-th decision tree, that the center position coordinates of the dynamic vehicle lie at (m, n) given the block centered at (s', r');
(2) computing the cumulative voting value of every candidate center position, and computing the appearance feature confidence value of each candidate center position with

$$F_a^t(m,n)=\frac{e^{V_a^t(m,n)}}{e^{V_a^t(m,n)+\max_{(m,n)\in A}\{V_a^t(m,n)\}}}$$

where $F_a^t(m,n)$ is the appearance feature confidence value that the center position coordinates of the dynamic vehicle at time t lie at (m, n), $V_a^t(m,n)$ is the corresponding cumulative voting value, A is the region of interest, and $\{V_a^t(m,n)\}$ is the set of cumulative voting values of all candidate center positions.
Advantageous effect: the appearance feature assessment unit 331 is based on a Hough forest model; the Hough forest detector trained on the previous video frame casts accumulated votes for the center position coordinates of the dynamic vehicle in the next video frame. This reduces the influence of target scale or pose changes on the reliability of the appearance features, improves the accuracy of target tracking, and facilitates the subsequent accurate positioning of the dynamic vehicle.
Preferably, processing the optical-flow tracking blocks and outputting the motion feature confidence value of the center position coordinates of the dynamic vehicle in the video image at time t specifically includes:
(1) starting from the optical-flow tracking blocks obtained from the initialization submodule 32, computing the center of each optical-flow tracking block at time t with the Lucas-Kanade optical flow algorithm, and filtering out, by median filtering, the tracking blocks with large forward-backward optical-flow error, to obtain the set of effective optical-flow tracking block centers at time t, $C=\{(\nu_k^t,\omega_k^t)\}_{k=1}^{M}$, and the set of offsets of the effective tracking blocks relative to the center of the dynamic vehicle, $E=\{(d\nu_k,d\omega_k)\}_{k=1}^{M}$, where k indexes the k-th effective optical-flow tracking block, M is the number of effective tracking blocks, $(\nu_k^t,\omega_k^t)$ is the center position coordinate of the k-th effective tracking block at time t, and $d\nu_k$ and $d\omega_k$ are the horizontal and vertical offsets of the center position coordinate of the k-th effective tracking block;
(2) computing, with the following formula, the optical-flow feature cumulative voting value supporting the hypothesis that the center position coordinates of the dynamic vehicle at time t lie at (m, n):
where $V_m^t(m,n)$ is the optical-flow feature cumulative voting value for the center of the dynamic vehicle at coordinate (m, n); $\theta_k$ is the weight of the k-th effective optical-flow tracking block; M is the number of effective tracking blocks; $(\nu_k,\omega_k)$ is the center position coordinate of the k-th effective tracking block; $(d\nu_k,d\omega_k)$ is the offset of the center of the k-th effective tracking block relative to the center position coordinate of the dynamic vehicle; $\sigma^2$ is a constant parameter with $\sigma^2=4$; $\lambda$ is a constant parameter; w is the width of the target region and h its height;
(3) obtaining, with the following formula, the motion feature confidence value that the center of the dynamic vehicle at time t lies at (m, n):
where $F_m^t(m,n)$ is the motion feature confidence value that the center of the dynamic vehicle at time t lies at (m, n), $V_m^t(m,n)$ is the optical-flow feature cumulative voting value at coordinate (m, n), A is the region of interest, and $\{V_m^t(m,n)\}$ is the set of optical-flow feature cumulative voting values of all candidate center positions.
Advantageous effect: the motion feature assessment unit 332 describes the dynamic vehicle through its motion information in the space-time domain. The weight $\theta_k$ makes full use of the spatial position information of the local image blocks to constrain the center position of the dynamic vehicle: blocks close to the center of the moving target receive larger weights, which effectively reduces the adverse effect of blocks from the background region on the estimation of the center position of the dynamic vehicle. At the same time, by computing the relative weight of each effective optical-flow tracking block with respect to the center position of the dynamic vehicle and accumulating them, the resulting motion feature confidence value not only reflects the space-time relationship of the dynamic vehicle between video frames, but also solves the target localization problem caused by changes of target scale or pose, making the subsequent positioning of the moving target more precise and reliable.
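The voting of the effective optical-flow blocks for the vehicle center can be sketched as below. The patent's voting formula uses the weights $\theta_k$ together with a kernel parameterized by $\sigma^2$ and $\lambda$ that is not reproduced in the text, so the simple weighted average here is an assumption:

```python
def vote_center(blocks, offsets, weights):
    """Each effective flow block k at (v_k, w_k) with stored offset
    (dv_k, dw_k) relative to the vehicle center votes for the center
    (v_k - dv_k, w_k - dw_k); votes are combined by weights theta_k
    (the weighting scheme is an assumption)."""
    total = sum(weights)
    cx = sum(th * (v - dv)
             for (v, w), (dv, dw), th in zip(blocks, offsets, weights)) / total
    cy = sum(th * (w - dw)
             for (v, w), (dv, dw), th in zip(blocks, offsets, weights)) / total
    return cx, cy
```

Turning such per-block votes into a confidence map over (m, n), as the patent does, would amount to accumulating each weighted vote into a grid instead of averaging.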
Preferably, estimating the center position coordinates of the dynamic vehicle specifically includes:
(1) computing the fuzzy synthesis confidence value at time t with the following formula,
where $F_w^t(m,n)$ is the fuzzy synthesis confidence value that the center of the dynamic vehicle at time t lies at coordinate (m, n), $F_a^t(m,n)$ is the appearance feature confidence value at (m, n), $F_m^t(m,n)$ is the motion feature confidence value at (m, n), and $\kappa$ is a weight factor;
(2) estimating, from step (1), the center position coordinates of the dynamic vehicle in the video image at time t, yielding the estimated center coordinates of the dynamic vehicle at time t.
Advantageous effect: the fuzzy synthesis confidence value computed by this formula makes the fuzzy synthesis confidence map sharper and reduces the uncertainty in positioning, which not only increases the success rate of target tracking but also effectively improves its accuracy.
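The fusion formula of step (1) is given only as an image in the filing; a common choice consistent with the description ($\kappa$ acting as a weight factor between the two confidence maps) is the linear blend below, which is an assumption:

```python
def fuse_confidence(Fa, Fm, kappa=0.6):
    """Assumed linear fusion F_w = kappa*F_a + (1-kappa)*F_m over all
    candidate centers; returns the argmax center and the fused map."""
    Fw = {mn: kappa * Fa.get(mn, 0.0) + (1 - kappa) * Fm.get(mn, 0.0)
          for mn in set(Fa) | set(Fm)}
    center = max(Fw, key=Fw.get)  # estimated center coordinates
    return center, Fw
```

Sharper fused maps (the advantageous effect noted above) show up here as a larger gap between the argmax value and the rest of `Fw`.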
Preferably, the target scale update and selection unit 334 updates the scale of the dynamic vehicle region according to the estimated coordinates obtained by the positioning unit 333, re-selects positive training samples, negative training samples and optical-flow tracking blocks from the updated video image at time t, and updates the Hough forest detector with the re-selected positive and negative training samples; this specifically includes:
(1) according to the estimated coordinates of the center of the dynamic vehicle at time t obtained by the positioning unit 333, determining the set G of effective optical-flow tracking block center coordinates in the video image at time t:
where $\varepsilon$ is a constant parameter; $(s'_k, r'_k)$ is the center coordinate of the k-th effective optical-flow tracking block at time t in the set C; G is the set of effective optical-flow tracking block center coordinates supporting the estimated center coordinates of the dynamic vehicle at time t; $\|\cdot\|_2$ is the vector length; C is the set of effective optical-flow tracking block center position coordinates at time t; and E is the set of offsets of the effective tracking blocks relative to the center of the dynamic vehicle at time t;
(2) from the obtained set G, computing the estimated target scale change rate of the k-th effective optical-flow tracking block with the following formula,
where $v_k$ is the estimated target scale change rate of the k-th effective optical-flow tracking block, $\|(ds_k, dr_k)\|_2$ is the length of the offset vector of the k-th effective tracking block relative to the target center, and $(s'_k, r'_k) \in G$ is the center coordinate of the k-th effective optical-flow tracking block at time t;
(3) computing the target scale change rates of all effective optical-flow tracking blocks in G to obtain the set $\{v_k\}$, and from this set computing the estimate of the target region scale at time t with the following formula:
where the estimate of the target region scale at time t is computed from the estimate at time (t-1), M is the number of effective optical-flow tracking blocks, $\eta$ is a constant with $0 < \eta < 1$, and $v_k$ is the target scale change rate estimated from the k-th effective optical-flow tracking block;
(4) updating the target region of the current frame according to the estimate of the target region scale obtained in step (3);
(5) if the fuzzy synthesis confidence value $F_w$ of the center position coordinates of the dynamic vehicle satisfies $F_w > \mu_1$, where $\mu_1$ is a constant with $0 < \mu_1 < 1$, randomly sampling several local image blocks from the current updated target region and its adjacent region as new Hough forest training samples, and randomly sampling several local image blocks from the current target region as new optical-flow tracking blocks; when the video image at the next time instant arrives, retraining the Hough forest detector and repeating the above steps, thereby realizing the tracking detection of the dynamic vehicle.
Advantageous effect: when updating the target region scale, through the definition of the set G, the value of $\varepsilon$ controls how many effective optical-flow tracking blocks support the current estimate of the center of the dynamic vehicle, which is conducive to an accurate estimate of the target region scale. At the same time, the estimate takes the target region scale at time t-1 into account, which effectively suppresses the influence of noise in the scale change rates estimated at time t on the estimation of the target region size, improving the accuracy of the target region size estimation.
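The smoothing in step (3), which blends the previous scale estimate with the per-block change rates using $\eta$, can be sketched as follows; since the formula itself appears only as an image in the filing, the exact combination below is an assumption:

```python
def update_scale(prev_scale, change_rates, eta=0.8):
    """Assumed smoothed scale update: keep a fraction eta of the previous
    scale estimate and blend in the previous scale rescaled by the mean
    per-block change rate v_k (eta in (0, 1) as in the text)."""
    if not change_rates:            # no effective blocks survived filtering
        return prev_scale
    mean_rate = sum(change_rates) / len(change_rates)
    return eta * prev_scale + (1 - eta) * prev_scale * mean_rate
```

The averaging over the M change rates suppresses per-block noise, and the $\eta$-weighted carry-over of the time t-1 estimate gives the temporal smoothing described in the advantageous effect above.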
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solution of the present invention rather than to limit its scope of protection. Although the present invention has been explained in detail with reference to the preferred embodiments, those of ordinary skill in the art should understand that the technical solution of the present invention can be modified or equivalently replaced without departing from the essence and scope of the technical solution of the present invention.
Claims (5)
1. An intelligent traffic monitoring system, characterized in that it includes a camera, a wireless transmission module and a back-end monitoring service platform;
the camera collects video images of the monitored road section;
the wireless transmission module sends the collected video images to the back-end monitoring service platform;
the back-end monitoring service platform detects and tracks dynamic vehicles in the video images.
2. The intelligent traffic monitoring system according to claim 1, characterized in that the back-end monitoring service platform includes a target acquisition submodule, an initialization submodule and a tracking and localization submodule; the target acquisition submodule selects one frame from the video images as the starting frame and manually marks the dynamic vehicle in the starting frame; the initialization submodule performs an initialization operation on the starting frame; the tracking and localization submodule detects and tracks the dynamic vehicle according to the initialization result of the initialization submodule.
3. The intelligent traffic monitoring system according to claim 2, characterized in that the initialization operation on the starting frame specifically includes:
(1) randomly sampling a certain number of local image blocks from the target region of the starting frame where the dynamic vehicle is located as positive training samples, and randomly sampling a certain number of local image blocks from the background region adjacent to the target region as negative training samples;
(2) extracting the image features of the starting frame from the positive and negative training samples, the image features including: the color features and gradient features of the positive training samples together with their spatial offset information relative to the center of the dynamic vehicle, and the color features and gradient features of the negative training samples; and building a Hough forest detector with I decision trees from the obtained image features;
(3) randomly taking a certain number of local image blocks from the target region as optical-flow tracking blocks, and randomly initializing the initial positions of the optical-flow tracking blocks.
4. The intelligent traffic monitoring system according to claim 3, characterized in that the tracking and localization submodule includes an appearance feature assessment unit, a motion feature assessment unit, a positioning unit, and a target scale update and selection unit;
the appearance feature assessment unit performs Hough voting, through the Hough forest detector, on the center position coordinates of the dynamic vehicle in the video image at time t, and obtains an appearance feature confidence value from the voting results;
the motion feature assessment unit processes the optical-flow tracking blocks according to the motion information of the dynamic vehicle in the space-time domain, and outputs the motion feature confidence value of the center position coordinates of the dynamic vehicle in the video image at time t;
the positioning unit estimates the center position coordinates of the dynamic vehicle in the video image at time t from the obtained appearance feature confidence value and motion feature confidence value, yielding the estimated center coordinates of the dynamic vehicle;
the target scale update and selection unit updates the scale of the target region according to the estimated coordinates obtained by the positioning unit, re-selects positive training samples, negative training samples and optical-flow tracking blocks from the updated video image at time t, and updates the Hough forest detector with the re-selected positive and negative training samples.
5. The intelligent traffic monitoring system according to claim 4, characterized in that performing Hough voting on the center position coordinates of the dynamic vehicle and obtaining the appearance feature confidence value from the voting results specifically includes:
(1) loading the video image at time t and randomly sampling a certain number of local image blocks from it; passing each local image block through the Hough forest detector, whose decision trees judge whether the block belongs to the dynamic vehicle; when a decision tree judges that a block belongs to the dynamic vehicle, casting a Hough vote for the center position coordinates of the dynamic vehicle and accumulating the voting results, where the cumulative voting value at video image coordinate (m, n) is calculated as follows:
$$V_a^t(m,n)=\sum_{(s',r')\in A}\frac{1}{I}\sum_{i=1}^{I}p\big((m,n)\,\big|\,L_i(s',r')\big)$$
where $V_a^t(m,n)$ is the cumulative voting value supporting the hypothesis that the center position coordinates of the dynamic vehicle at time t lie at coordinate (m, n); A is the region of interest, i.e. the target region and its adjacent background region; I is the total number of decision trees in the Hough forest detector; $L_i(s',r')$ is the leaf node reached by the local image block centered at (s', r') through the i-th decision tree; and $p((m,n)\mid L_i(s',r'))$ is the probability that the center position coordinates of the dynamic vehicle lie at (m, n) given that the block centered at (s', r') passes through the i-th decision tree;
(2) computing the cumulative voting values of all candidate center positions, and computing the appearance feature confidence value of each candidate center position with the following formula:
$$F_a^t(m,n)=\frac{e^{V_a^t(m,n)}}{e^{V_a^t(m,n)+\max_{(m,n)\in A}\{V_a^t(m,n)\}}}$$
where $F_a^t(m,n)$ is the appearance feature confidence value that the center position coordinates of the dynamic vehicle at time t lie at (m, n), $V_a^t(m,n)$ is the cumulative voting value at (m, n) at time t, A is the region of interest, and $\{V_a^t(m,n)\}$ is the set of cumulative voting values of all candidate center positions.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711485166.7A CN108062861B (en) | 2017-12-29 | 2017-12-29 | Intelligent traffic monitoring system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711485166.7A CN108062861B (en) | 2017-12-29 | 2017-12-29 | Intelligent traffic monitoring system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108062861A true CN108062861A (en) | 2018-05-22 |
CN108062861B CN108062861B (en) | 2021-01-15 |
Family
ID=62140952
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711485166.7A Active CN108062861B (en) | 2017-12-29 | 2017-12-29 | Intelligent traffic monitoring system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108062861B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114677774A (en) * | 2022-03-30 | 2022-06-28 | 深圳市捷顺科技实业股份有限公司 | Barrier gate control method and related equipment |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102831618A (en) * | 2012-07-20 | 2012-12-19 | 西安电子科技大学 | Hough forest-based video target tracking method |
CN103345840A (en) * | 2013-05-28 | 2013-10-09 | 南京正保通信网络技术有限公司 | Video detection method for road-crossing events at intersections |
WO2013182298A1 (en) * | 2012-06-08 | 2013-12-12 | Eth Zurich | Method for annotating images |
CN103593679A (en) * | 2012-08-16 | 2014-02-19 | 北京大学深圳研究生院 | Visual human-hand tracking method based on online machine learning |
CN104112282A (en) * | 2014-07-14 | 2014-10-22 | 华中科技大学 | Method for tracking multiple moving objects in a surveillance video based on online learning |
2017
- 2017-12-29 CN CN201711485166.7A patent/CN108062861B/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013182298A1 (en) * | 2012-06-08 | 2013-12-12 | Eth Zurich | Method for annotating images |
CN102831618A (en) * | 2012-07-20 | 2012-12-19 | 西安电子科技大学 | Hough forest-based video target tracking method |
CN103593679A (en) * | 2012-08-16 | 2014-02-19 | 北京大学深圳研究生院 | Visual human-hand tracking method based on online machine learning |
CN103345840A (en) * | 2013-05-28 | 2013-10-09 | 南京正保通信网络技术有限公司 | Video detection method for road-crossing events at intersections |
CN104112282A (en) * | 2014-07-14 | 2014-10-22 | 华中科技大学 | Method for tracking multiple moving objects in a surveillance video based on online learning |
Non-Patent Citations (6)
Title |
---|
JUN XIANG, NONG SANG, JIANHUA HOU, RUI HUANG, AND CHANGXIN GAO: "Hough Forest-based Association Framework with Occlusion Handling for Multi-Target Tracking", 《IEEE SIGNAL PROCESSING LETTERS》 * |
KENSHO HARA, TAKATSUGU HIRAYAMA, KENJI MASE: "Trend-sensitive hough forests for action detection", 《2014 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING》 * |
YOU WEI: "Pedestrian detection method based on multiple features and improved Hough forest", 《China Master's Theses Full-text Database, Information Science and Technology》 * |
ZHANG WENTING: "Target detection and tracking in traffic environments based on improved Hough forest", 《China Master's Theses Full-text Database, Information Science and Technology》 * |
LIANG FUXIN, LIU HONGBIN, CHANG FALIANG: "Hough forest multi-target tracking with multi-feature fusion matching", 《Journal of Xidian University (Natural Science Edition)》 * |
GAO QINGJI, HUO LU, NIU GUOCHEN: "Multi-target tracking algorithm based on improved Hough forest framework", 《Journal of Computer Applications》 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114677774A (en) * | 2022-03-30 | 2022-06-28 | 深圳市捷顺科技实业股份有限公司 | Barrier gate control method and related equipment |
CN114677774B (en) * | 2022-03-30 | 2023-10-17 | 深圳市捷顺科技实业股份有限公司 | Barrier gate control method and related equipment |
Also Published As
Publication number | Publication date |
---|---|
CN108062861B (en) | 2021-01-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104200657B (en) | Traffic flow parameter acquisition method based on video and sensors | |
CN103268616B (en) | Multi-feature, multi-sensor human tracking method for a mobile robot | |
CN106781479B (en) | Method for obtaining highway operating status in real time based on mobile-phone signaling data | |
CN110472496A (en) | Intelligent traffic video analysis method based on object detection and tracking | |
CN110008867A (en) | Early-warning method, device and storage medium based on abnormal personal behavior | |
CN107798870B (en) | Track management method and system for multi-vehicle target tracking, and vehicle | |
CN106056100A (en) | Vehicle auxiliary positioning method based on lane detection and object tracking | |
CN100573618C (en) | Four-phase vehicle flow detection method for a traffic intersection | |
CN108364466A (en) | Traffic flow statistics method based on unmanned aerial vehicle traffic video | |
CN106529493A (en) | Robust multi-lane line detection method based on perspective drawing | |
CN104680559B (en) | Multi-view indoor pedestrian tracking method based on movement behavior patterns | |
CN110110649A (en) | Alternative face detection method based on velocity direction | |
CN105869178A (en) | Method for unsupervised segmentation of complex targets from dynamic scenes based on multi-scale combined-feature convex optimization | |
CN107397658B (en) | Multi-scale fully convolutional network and visual blind-guiding method and device | |
CN107014375B (en) | Ultra-low-deployment indoor positioning system and method | |
CN103425764B (en) | Video-based vehicle matching method | |
CN103854292A (en) | Method and device for counting people and determining crowd motion direction | |
CN107339992A (en) | Behavior-based indoor positioning and landmark semantic annotation method | |
CN103593679A (en) | Visual human-hand tracking method based on online machine learning | |
CN109668563A (en) | Indoor-trajectory-based processing method and device | |
CN103646454A (en) | Parking lot management system and method | |
CN101493943A (en) | Particle filter tracking method and tracking device | |
CN102930524A (en) | Head detection method based on vertically-mounted depth cameras | |
CN107230219A (en) | Method for finding and following a target person with a monocular robot | |
CN109636828A (en) | Object tracking method and device based on video images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right | ||
Effective date of registration: 20201230
Address after: 102600 1304, 13th floor, building 13, yard 25, Xinyuan street, Daxing District, Beijing
Applicant after: Beijing anzida Technology Co.,Ltd.
Address before: 234000 Caocan Town, Yongqiao District, Suzhou City, Anhui Province
Applicant before: Pan Yanling
GR01 | Patent grant | ||