CN101908236A - Public traffic passenger flow statistical method - Google Patents

Public traffic passenger flow statistical method

Info

Publication number
CN101908236A
Authority
CN
China
Prior art keywords
head
zone
threshold value
counting
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 201010195212
Other languages
Chinese (zh)
Other versions
CN101908236B (en)
Inventor
徐建华
陈晓蓉
戴曙光
穆平安
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology filed Critical University of Shanghai for Science and Technology
Priority to CN2010101952121A priority Critical patent/CN101908236B/en
Publication of CN101908236A publication Critical patent/CN101908236A/en
Application granted granted Critical
Publication of CN101908236B publication Critical patent/CN101908236B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a public traffic passenger flow statistics method based on multi-threshold region growing and the optical flow method. Aimed at the special environment inside a bus, the head is used as the counting target: two thresholds are chosen dynamically, the head is separated from the background by two region-growing passes and feature extraction, and the growing threshold is determined from the image variance through a piecewise function relating the two. For other special cases, such as passengers wearing hats, the optical flow method is used with the moving human body as the counting target. Boarding and alighting counting lines are designated manually in the image; when a tracked target crosses a counting line, the distance it has moved along the counting-line direction, from the coordinates where it was first detected to its current coordinates, is computed, and a count is produced only if this distance exceeds a predetermined threshold. This effectively reduces the influence of interfering objects on the count.

Description

Public traffic passenger flow statistical method
Technical field
The present invention relates to moving object detection and tracking technology, and in particular to a public traffic passenger flow statistics method based on multi-threshold region growing and the optical flow method.
Background technology
For moving object detection, the classical methods are mainly background subtraction, the frame difference method, and the optical flow method. The background subtraction method is mainly used where the background changes little or only slowly: before the target enters the camera's field of view a background image is extracted, and after the target enters, the background image is subtracted from the current image to obtain the moving target. Background updating is the key issue of the background method; the main approaches at present are the single-Gaussian and mixture-of-Gaussians background models. Because the environment differs from stop to stop, and the scene on a bus is strongly affected by lighting and occlusion, the background subtraction method is not suitable here. Although the frame difference method is not greatly affected by lighting, passengers easily stick together in a crowd, and separating individual passengers is a difficult problem. Even in a crowd, however, passengers' heads are rarely close together, so using the head as the counting target is easy to implement. Many researchers currently use edge extraction and the Hough transform to find heads, but this approach is computationally expensive and requires considerable storage.
The optical flow method has an advantage for detecting moving targets: it can detect them without knowing any scene information. Gradient-based optical flow computation is now widely used. The main problems with using the optical flow method for moving object detection are that the computation is time-consuming and both real-time performance and practicality are poor; using the optical flow method alone to detect moving targets is impractical, but optical flow computation can be combined with other methods to detect moving targets.
Summary of the invention
To solve the problems of the prior art, the present invention proposes a public traffic passenger flow statistics method based on multi-threshold region growing and the optical flow method: a camera installed on the bus captures images of boarding and alighting passengers in real time, and this method is used to count the number of passengers getting on and off.
To solve the above technical problems, the technical solution of the present invention is:
A public traffic passenger flow statistics method comprising the following steps:
First, two thresholds are obtained by applying the Otsu threshold method twice; the larger threshold is used as a coarse selection value to filter out regions that cannot be heads, and the smaller threshold serves as the head average gray threshold;
Then the variance of the image is computed to determine two growth thresholds of different size, G_m and G_n;
Region growing is first performed on the image using G_m, and regions that do not contain heads are filtered out by area and average gray value;
For each retained region, it is first determined whether it contains the centroid coordinate of any head detected in the previous frame; if so, region growing is performed on the region with each growth threshold from G_n to G_m in steps of 1, and the region most similar to that head is found by normalized matching;
For a region that does not contain the centroid of any head from the previous frame, its circularity is first compared with the circularity threshold; if it is greater, the region is directly taken as a head region and saved;
Otherwise the region is opened with a disc of radius R, where R represents the average head size; for each resulting region it is checked whether its average gray value is below the head average gray threshold and its circularity exceeds the circularity threshold, and if so the region is taken as a head and saved. Otherwise region growing is performed again with G_n, and the separated head regions are found by feature extraction;
The initial horizontal coordinate of each head centroid and its coordinates in the current frame are stored in an array;
The optical flow field of the image is computed to obtain the moving regions; each moving region is ANDed with the heads detected by the region-growing method, and if the result is non-empty the moving region is deleted, otherwise it is kept;
The heads detected by the region-growing method and those detected by the optical flow method are each matched, by nearest-neighbour matching, with the heads detected by the same algorithm in the previous frame, and the position of each head is updated;
It is determined whether each head has crossed a counting line; if so, the distance it has travelled along the counting-line direction is computed, and a count is produced if this exceeds the threshold; otherwise no action is taken.
After the first region growing, it is determined whether each region contains the centroid of a head detected in the previous frame image; if so, the head region most similar to that of the previous frame is extracted using a series of growth thresholds and normalized shape matching.
For special cases such as passengers wearing hats, the optical flow method is used to detect the moving human body; the optical flow method and the region-growing algorithm are used together, each contributing to the count.
Two counting lines, for boarding and alighting, are designated manually. When a tracked target crosses a counting line, the distance it has travelled along the counting-line direction is compared with a predetermined threshold; a count is produced only if the distance is greater, otherwise no count is made.
Compared with the prior art, the beneficial effects of the present invention are:
Bus passenger flow statistics are of great significance: recording the degree of crowding at each stop and the ridership of each route provides a rich basis of information for arranging bus stops, scheduling buses, and adjusting bus routes.
The bus passenger flow statistics algorithm of the present invention, based on multi-threshold region growing and the optical flow method, takes the region-growing algorithm as the main method with the optical flow method as an auxiliary, and improves extraction accuracy through normalized shape matching. Whether a count is produced is judged by two conditions, crossing the counting line and the displacement travelled, which effectively reduces the influence of interfering objects on the count.
Embodiment
The method of the invention uses the region-growing algorithm to separate heads from the background and then detects the heads, which serve as counting targets, by feature extraction. The region-growing algorithm uses two thresholds, chosen through a piecewise function of the image variance. This piecewise function was determined after statistical analysis of a large number of images.
After a head is detected, the coordinates of its centroid are computed. The algorithm assumes that the centroid coordinate of each head in the previous frame is contained within that head's region in the next frame (which essentially holds when the camera frame rate is above 25 frames per second). Therefore, after the first region growing is completed, it is first determined whether each region contains the centroid of a head detected in the previous frame; if so, region growing is performed on that region with a series of growth thresholds, and the head region most similar to the head in the previous frame is extracted by the normalized shape matching method.
For other special situations, such as passengers wearing hats, the algorithm uses the optical flow method as an auxiliary counter: after the optical flow image is computed, regions with larger optical flow are selected by thresholding. Each extracted optical flow region is ANDed with the head regions detected by the region-growing algorithm to avoid double counting.
Each head and each moving region detected by the optical flow method (hereafter collectively called moving targets) has its information stored in a three-element array. Assuming the counting line is referenced to the x coordinate, the first element of the array is the horizontal centroid coordinate of the moving target when it is first detected, and the second and third elements are the horizontal and vertical centroid coordinates of the moving target in the current image. When a target crosses the counting line, the distance it has travelled along the counting-line direction is evaluated; if it exceeds the threshold a count is produced, otherwise no count is made.
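As an illustration of this counting decision, the following minimal sketch (names such as Track, COUNT_LINE_X and MIN_TRAVEL are assumptions, not from the patent) applies the two conditions, crossing the line and sufficient travel along the counting direction, to the three-element record just described.

```python
# Minimal sketch of the counting-line decision described above.
# Assumptions (not from the patent): the counting line is vertical at
# x = COUNT_LINE_X, and each tracked target stores (x0, x, y) as described.
from dataclasses import dataclass

COUNT_LINE_X = 160      # assumed x-coordinate of the boarding counting line
MIN_TRAVEL   = 25       # assumed minimum travel (pixels) along the counting direction

@dataclass
class Track:
    x0: float           # centroid x when the target was first detected
    x: float            # centroid x in the current frame
    y: float            # centroid y in the current frame

def crossed_and_counted(track: Track) -> bool:
    """Return True if the target has crossed the counting line and has
    travelled far enough along the counting direction to be counted."""
    crossed = (track.x0 - COUNT_LINE_X) * (track.x - COUNT_LINE_X) <= 0
    travelled = abs(track.x - track.x0)
    return crossed and travelled > MIN_TRAVEL

# Example: a head first seen at x = 200 that has moved to x = 120
print(crossed_and_counted(Track(x0=200, x=120, y=80)))   # True
```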
The flow of the method comprises the following steps:
1. The camera captures the first gray-level frame, which serves as the initial image for the optical flow method.
2. The Otsu threshold T1 of this image is computed; then the Otsu threshold T2 of the pixels whose gray value is below T1 is computed and used as the head average gray threshold.
3. The gray variance of the pixels below T1 is computed, and the two corresponding growth thresholds G_m and G_n are obtained from the piecewise function relating variance and growth threshold.
4. Region growing is performed on the image with G_m as the growth threshold, and regions of suitable area whose average gray value is below T1 are chosen as candidate head regions.
5. The following operations are performed on each candidate region (a small sketch of this filtering follows the step list):
Its circularity is first compared with the circularity threshold; if it is greater, the region is taken as a head and saved. Otherwise the region is opened with a disc of radius R (set according to the average head size appearing in the image); among the resulting regions, those with average gray value below T2 and circularity above the threshold are saved as head regions. If a head region still cannot be obtained, a second region growing is performed on the region with growth threshold G_n, the result is opened with a disc of radius R, and regions with average gray value below T2 and circularity above the threshold are saved as head regions.
6. The initial horizontal coordinate of each head region (with the x direction as the counting direction) and its centroid coordinates in the current frame are stored in an array, i.e. each head region is described by three values.
7. The camera captures the second frame, which is processed according to steps 2, 3 and 4. For each resulting region, it is determined whether it contains the centroid of a head from the previous frame; if so, region growing is performed on the region with a series of growth thresholds from G_n to G_m in steps of 1, and the normalized matching algorithm is used to find the region most similar to that head, which gives the position and shape of the head in the current frame.
8. Regions that do not contain the centroid of any head from the previous frame are processed according to step 5.
9. The difference image of the second and first frames is computed to obtain the moving region; small motion regions are filtered out by an opening operation, and the minimum bounding rectangle REC containing the moving region is computed.
10. With the first frame as the reference image, the optical flow field within REC is computed; pixels whose optical flow exceeds a threshold are selected as motion points and their connected components are computed. The resulting connected components are opened with a disc of appropriate radius in order to break small connections between two moving objects.
11. Each optical flow connected component is ANDed with each head region obtained by the region-growing algorithm. If the result is non-empty the component is deleted; otherwise it is kept and treated as a head.
12. The positions of heads detected by the optical flow method are likewise stored in an array, comprising the initial horizontal coordinate and the position in the current frame. The second frame then replaces the first frame as the reference image for the next optical flow computation.
13. The heads extracted by the region-growing method and those detected by the optical flow method are each matched, by nearest-neighbour matching, with the heads obtained by the same algorithm in the previous frame, and the position of each head in the current frame is updated (the initial horizontal coordinate remains unchanged).
14. It is determined whether each head has crossed a counting line; if so, the distance it has travelled along the counting-line direction is computed, and if this exceeds the threshold a count is produced and the head is deleted. Otherwise no action is taken.
15. The camera continues to capture images, and steps 2 through 13 are repeated.
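The sketch below illustrates the candidate filtering of step 5 (circularity test, then opening with a disc of radius R and re-testing the pieces). The circularity formula 4*pi*A/P^2, the helper names and the parameter values are assumptions, and the second-pass growing with G_n is omitted.

```python
# Sketch of the candidate filtering in step 5, under the assumptions noted above.
import cv2
import numpy as np

R = 12                   # assumed average head radius in pixels
CIRC_THRESH = 0.7        # assumed circularity threshold
T2 = 90                  # assumed head average-gray threshold (second Otsu value)

def circularity(mask: np.ndarray) -> float:
    """Circularity 4*pi*A / P^2 of the largest contour in a binary uint8 mask."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0.0
    c = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(c)
    perim = cv2.arcLength(c, True)
    return 4.0 * np.pi * area / (perim * perim) if perim > 0 else 0.0

def filter_candidate(gray: np.ndarray, region_mask: np.ndarray) -> list:
    """Return a list of accepted head masks for one candidate region."""
    if circularity(region_mask) > CIRC_THRESH:
        return [region_mask]                       # accept the region directly
    # Otherwise open with a disc of radius R and re-test the resulting pieces.
    disc = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2 * R + 1, 2 * R + 1))
    opened = cv2.morphologyEx(region_mask, cv2.MORPH_OPEN, disc)
    n, labels = cv2.connectedComponents(opened)
    heads = []
    for k in range(1, n):
        piece = np.uint8(labels == k) * 255
        mean_gray = gray[piece > 0].mean() if piece.any() else 255
        if mean_gray < T2 and circularity(piece) > CIRC_THRESH:
            heads.append(piece)
    return heads
```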
The present invention is described in detail below through specific algorithms and formulas.
1. Before the region-growing algorithm is applied, the image is analyzed and the gray threshold of the head is computed using the Otsu threshold method.
In the first step, the maximum and minimum gray values of the image are computed and their mean is taken:

$\bar{g} = \frac{g_{\max} + g_{\min}}{2}$

In the second step, with $\bar{g}$ as the threshold, the image is divided into two parts and the average gray values $\bar{g}_1$ and $\bar{g}_2$ of the two parts are computed, giving

$\bar{g}' = \frac{\bar{g}_1 + \bar{g}_2}{2} \qquad (1)$

If

$|\bar{g}' - \bar{g}| < 0.05 \qquad (2)$

then $\bar{g}$ is the Otsu threshold sought; otherwise set $\bar{g} = \bar{g}'$, continue with $\bar{g}$ as the threshold, and repeat the second step until (2) holds, which yields the Otsu threshold.
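The iterative computation above can be expressed compactly. The following is a minimal sketch (not the patent's code); the function name is an assumption, and the two-pass use for T1 and T2 is shown as a commented example.

```python
# Minimal sketch of the iterative threshold computation described above
# (steps 1-2 with stopping criterion (2)); a sketch only, not the patent's code.
import numpy as np

def iterative_threshold(gray: np.ndarray, eps: float = 0.05) -> float:
    """Iteratively split the image at the mean of the two class means."""
    g = (float(gray.max()) + float(gray.min())) / 2.0     # initial threshold
    while True:
        low, high = gray[gray <= g], gray[gray > g]
        g1 = low.mean() if low.size else g
        g2 = high.mean() if high.size else g
        g_new = (g1 + g2) / 2.0                            # equation (1)
        if abs(g_new - g) < eps:                           # criterion (2)
            return g_new
        g = g_new

# Example use for T1 on the whole image, then T2 on the pixels below T1:
# img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
# T1 = iterative_threshold(img)
# T2 = iterative_threshold(img[img < T1])
```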
2. The gray variance S of the image is computed, and the two region-growing thresholds G_m and G_n are determined from it. Only the variance of the pixels whose gray value is below the Otsu threshold is computed here: since head gray values are generally low, the information provided by pixels darker than the Otsu threshold is more accurate than that provided by brighter pixels. The functional relationship between the growth thresholds and the variance is as follows:
G_m = 7, G_n = 4, when S > 60
G_m = 5, G_n = 3, when 40 ≤ S ≤ 60
G_m = 4, G_n = 2, when S < 40
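The piecewise relation can be written directly as a small helper. This is a sketch only; the function name is an assumption, and, as the text above states, the variance is taken over the pixels below the Otsu threshold T1.

```python
# Sketch of the piecewise mapping from gray variance to growth thresholds.
import numpy as np

def growth_thresholds(gray: np.ndarray, T1: float) -> tuple:
    """Return (G_m, G_n) from the gray-level variance of the pixels below T1."""
    S = float(np.var(gray[gray < T1]))        # variance of the darker pixels only
    if S > 60:
        return 7, 4
    if S >= 40:
        return 5, 3
    return 4, 2
```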
3. Implementation of the region-growing algorithm. The region-growing algorithm adopted in this patent is as follows: starting from the upper-left corner of the image, a 3*3 template covering 9 points is placed over the upper-left corner and the average gray value g1 of the covered region R1 is computed; the template is then moved 3 pixels to the right and the average gray value g2 of the newly covered region R2 is computed. If
|g1 - g2| < T  (T is a threshold)
then R1 and R2 are merged and denoted region R1, its average gray value g1' is computed, and we set
g1 = g1'
Otherwise no merge is made. The template continues moving to the rightmost edge of the image with the same processing. The template is then moved back to the leftmost edge of the image and simultaneously moved down 3 pixels, and the average gray value g3 of the covered region R3 is computed. If
|g1 - g3| < T
then R1 and R3 are merged; otherwise no merge is made. This cycle continues until the template has covered the entire image.
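A simplified sketch of this block-based region growing follows: each 3*3 block is merged with its left or upper neighbour when the running region means differ by less than T. It is a single-pass approximation of the scan described above, not the patent's exact implementation, and the function name is an assumption.

```python
# Sketch of the 3x3 block region-growing scan described above (simplified).
import numpy as np

def block_region_growing(gray: np.ndarray, T: float) -> np.ndarray:
    """Label 3x3 blocks; neighbouring blocks are merged when their running
    region means differ by less than T. Returns a per-block label map."""
    h, w = gray.shape[0] // 3, gray.shape[1] // 3
    means = gray[:h * 3, :w * 3].reshape(h, 3, w, 3).mean(axis=(1, 3))
    labels = -np.ones((h, w), dtype=int)
    region_mean, region_count = {}, {}
    next_label = 0
    for i in range(h):
        for j in range(w):
            merged = False
            for ni, nj in ((i, j - 1), (i - 1, j)):          # left, then up
                if ni >= 0 and nj >= 0:
                    lab = labels[ni, nj]
                    if abs(means[i, j] - region_mean[lab]) < T:
                        labels[i, j] = lab
                        # update the running average gray value of the region
                        region_mean[lab] = (region_mean[lab] * region_count[lab]
                                            + means[i, j]) / (region_count[lab] + 1)
                        region_count[lab] += 1
                        merged = True
                        break
            if not merged:                                    # start a new region
                labels[i, j] = next_label
                region_mean[next_label] = means[i, j]
                region_count[next_label] = 1
                next_label += 1
    return labels
```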
4. The method uses normalized shape matching to compute the similarity of two regions, as follows. First the Hamming distance of the two regions is computed, i.e. the number of pixels in which the two regions differ. For two regions Region1 and Region2, the Hamming distance is expressed as:
Distance = |Norm(Region1) & ~Region2| + |Region2 & ~Norm(Region1)|
where Norm(Region1) denotes Region1 translated so that its centroid coincides with that of Region2, | · | denotes the pixel count, & denotes the AND operation, and ~ denotes negation. Once the Hamming distance of the two regions is obtained, their similarity is computed as:
Similarity = 1 - Distance / (|Norm(Region1)| + |Region2|)
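A minimal sketch of this normalized shape matching, assuming the two regions are given as boolean masks of the same image size; the helper names are assumptions.

```python
# Sketch of normalized shape matching: align Region1 so its centroid coincides
# with Region2's, then compute the Hamming distance and the similarity.
# Masks are boolean numpy arrays of identical shape.
import numpy as np

def centroid(mask: np.ndarray) -> np.ndarray:
    ys, xs = np.nonzero(mask)
    return np.array([ys.mean(), xs.mean()])

def norm_to(mask: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Shift `mask` so that its centroid matches the centroid of `target`."""
    dy, dx = np.round(centroid(target) - centroid(mask)).astype(int)
    return np.roll(np.roll(mask, dy, axis=0), dx, axis=1)

def similarity(region1: np.ndarray, region2: np.ndarray) -> float:
    r1 = norm_to(region1, region2)
    distance = np.count_nonzero(r1 & ~region2) + np.count_nonzero(region2 & ~r1)
    return 1.0 - distance / (np.count_nonzero(r1) + np.count_nonzero(region2))
```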
5. This patent adopts an improved version of the classical Horn-Schunck optical flow algorithm, which is based on image gradients and the assumption that pixel gray values remain constant.
For an image f(x, y), let the gray value of point (i, j) at time t be f(i, j, t), and suppose the optical flow has x-component u(i, j) and y-component v(i, j). After a time Δt, point (i, j) moves to (i + u·Δt, j + v·Δt), where the gray value is f(i + u·Δt, j + v·Δt, t + Δt). As Δt tends to 0 it can be assumed that
f(i, j, t) = f(i + u·Δt, j + v·Δt, t + Δt)
Applying a Taylor expansion to this equation and discarding the higher-order terms gives:

$\frac{\partial f}{\partial x}\frac{dx}{dt} + \frac{\partial f}{\partial y}\frac{dy}{dt} + \frac{\partial f}{\partial t} = 0 \qquad (3)$
Let

$f_x = \frac{\partial f}{\partial x}, \quad f_y = \frac{\partial f}{\partial y}, \quad f_t = \frac{\partial f}{\partial t}, \qquad u = \frac{dx}{dt}, \quad v = \frac{dy}{dt}$

Then (3) can be rewritten as:

$f_x u + f_y v + f_t = 0 \qquad (4)$
where

$f_x = \tfrac{1}{4}\big[f(i{+}1,j,t) - f(i,j,t) + f(i{+}1,j{+}1,t) - f(i,j{+}1,t) + f(i{+}1,j,t{+}1) - f(i,j,t{+}1) + f(i{+}1,j{+}1,t{+}1) - f(i,j{+}1,t{+}1)\big]$

$f_y = \tfrac{1}{4}\big[f(i,j{+}1,t) - f(i,j,t) + f(i{+}1,j{+}1,t) - f(i{+}1,j,t) + f(i,j{+}1,t{+}1) - f(i,j,t{+}1) + f(i{+}1,j{+}1,t{+}1) - f(i{+}1,j,t{+}1)\big]$

$f_t = \tfrac{1}{4}\big[f(i,j,t{+}1) - f(i,j,t) + f(i{+}1,j{+}1,t{+}1) - f(i{+}1,j{+}1,t) + f(i,j{+}1,t{+}1) - f(i,j{+}1,t) + f(i{+}1,j,t{+}1) - f(i{+}1,j,t)\big]$
u and v are the optical flow values to be determined, and they cannot be obtained from a single equation. However, since the optical flow field produced by one object is uniform and smooth within its region, Horn and Schunck proposed the smoothness constraint:

$E = \iint (f_x u + f_y v + f_t)^2 \, dx\, dy + \alpha \iint \left[ \left(\frac{\partial u}{\partial x}\right)^2 + \left(\frac{\partial u}{\partial y}\right)^2 + \left(\frac{\partial v}{\partial x}\right)^2 + \left(\frac{\partial v}{\partial y}\right)^2 \right] dx\, dy \qquad (5)$
where α is the regularization coefficient, representing the weight of the global smoothness term; its value generally depends on the accuracy with which the image gradients are computed. The computed u and v minimize E. Taking the partial derivatives of E with respect to u and v and setting each to 0 yields the iterative formulas for u and v:
$u_{ij}^{\,n+1} = \bar{u}_{ij}^{\,n} - \frac{f_x \bar{u}_{ij}^{\,n} + f_y \bar{v}_{ij}^{\,n} + f_t}{\alpha + f_x^2 + f_y^2} f_x$

$v_{ij}^{\,n+1} = \bar{v}_{ij}^{\,n} - \frac{f_x \bar{u}_{ij}^{\,n} + f_y \bar{v}_{ij}^{\,n} + f_t}{\alpha + f_x^2 + f_y^2} f_y$

where $\bar{u}_{ij}^{\,n}$ and $\bar{v}_{ij}^{\,n}$ denote the averages of the horizontal and vertical optical flow values, respectively, over the four pixels adjacent to point (i, j). Before the iteration begins, i.e. at n = 0, the optical flow is initialized to [0, 0]. The iteration then continues until

$|v_{ij}^{\,n+1} - \bar{v}_{ij}^{\,n}| < T \quad\text{and}\quad |u_{ij}^{\,n+1} - \bar{u}_{ij}^{\,n}| < T$

where T is a preset threshold, generally between 0.01 and 0.1. If this criterion alone does not stop the iteration, a cap on the number of iterations is applied and the computation stops after 100 iterations.
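A compact sketch of the iteration above follows. The derivative kernels approximate the 2x2x2 averages given in the text, the 4-neighbour mean is used for the averaged flow, and the values of alpha, T and the iteration cap are illustrative assumptions, not the patent's parameters.

```python
# Compact sketch of the Horn-Schunck iteration described above.
import numpy as np
from scipy.ndimage import correlate

def horn_schunck(f1: np.ndarray, f2: np.ndarray, alpha: float = 15.0,
                 T: float = 0.05, max_iter: int = 100):
    f1, f2 = f1.astype(np.float64), f2.astype(np.float64)
    kx = np.array([[-1, 1], [-1, 1]]) * 0.25     # averaged x-differences
    ky = np.array([[-1, -1], [1, 1]]) * 0.25     # averaged y-differences
    kt = np.ones((2, 2)) * 0.25                  # averaged t-differences
    fx = correlate(f1, kx) + correlate(f2, kx)
    fy = correlate(f1, ky) + correlate(f2, ky)
    ft = correlate(f2, kt) - correlate(f1, kt)
    # 4-neighbour averaging kernel for u-bar and v-bar
    avg = np.array([[0, 0.25, 0], [0.25, 0, 0.25], [0, 0.25, 0]])
    u = np.zeros_like(f1)
    v = np.zeros_like(f1)
    for _ in range(max_iter):
        u_bar = correlate(u, avg)
        v_bar = correlate(v, avg)
        common = (fx * u_bar + fy * v_bar + ft) / (alpha + fx ** 2 + fy ** 2)
        u_new = u_bar - fx * common
        v_new = v_bar - fy * common
        # stopping criterion from the text: change w.r.t. the averaged flow
        if np.max(np.abs(u_new - u_bar)) < T and np.max(np.abs(v_new - v_bar)) < T:
            u, v = u_new, v_new
            break
        u, v = u_new, v_new
    return u, v
```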
The accuracy of the optical flow computation depends to a large extent on the accuracy of the image gradient computation, so the image is generally denoised before the optical flow is computed, for example by Gaussian filtering. The limitation of the optical flow method is its long computation time, so in real-time processing the main problem to solve is how to increase its speed without significantly affecting the accuracy of the optical flow computation. This method speeds up the optical flow computation through two improvements.
(1) A scale-space transformation is applied to the image to reduce the number of pixels that must be processed. Suppose the captured image has size M*N; a scale transformation reduces it to M/2*N/2. According to the principle of optical flow computation, the optical flow within one object region is essentially uniform, so this does not significantly affect the accuracy of the optical flow computation. The optical flow field obtained after the transformation is scaled back to M*N by nearest-neighbour interpolation.
(2) The region in which the optical flow is computed is selected by the frame difference method. The optical flow field describes the velocity of moving regions, so static objects and the background need not be processed. The frame difference method detects the contours of moving objects, and the moving region is then obtained by filling and similar algorithms. Moving regions caused by illumination changes or small objects are removed by an opening operation. A combined sketch of both speed-ups follows.
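The sketch below combines the two speed-ups, assuming the horn_schunck() sketch given earlier and OpenCV for the frame difference and resizing; the function names and parameter values are illustrative assumptions.

```python
# Sketch of the two speed-ups described above: (1) compute flow at half scale
# and bring it back by nearest-neighbour interpolation, (2) restrict the
# computation to the bounding rectangle of the frame-difference motion region.
import cv2
import numpy as np

def motion_roi(prev: np.ndarray, curr: np.ndarray, diff_thresh: int = 20):
    """Bounding rectangle (x, y, w, h) of the moving region, or None."""
    diff = cv2.absdiff(curr, prev)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # drop small motion
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return xs.min(), ys.min(), xs.max() - xs.min() + 1, ys.max() - ys.min() + 1

def fast_flow(prev: np.ndarray, curr: np.ndarray):
    roi = motion_roi(prev, curr)
    if roi is None:
        return None
    x, y, w, h = roi
    p = cv2.resize(prev[y:y + h, x:x + w], (w // 2, h // 2))
    c = cv2.resize(curr[y:y + h, x:x + w], (w // 2, h // 2))
    u, v = horn_schunck(p, c)                     # sketch defined earlier
    # scale the flow field back to the ROI size with nearest-neighbour interpolation
    u = cv2.resize(u, (w, h), interpolation=cv2.INTER_NEAREST)
    v = cv2.resize(v, (w, h), interpolation=cv2.INTER_NEAREST)
    return u, v, roi
```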
The above is merely a preferred embodiment of the present invention and is not intended to limit its scope. Any person skilled in the art may make various modifications and variations without departing from the spirit and scope of the present invention; the protection scope of the present invention shall therefore be defined by the appended claims.

Claims (4)

1. A public traffic passenger flow statistics method, characterized by comprising the following steps:
First, two thresholds are obtained by applying the Otsu threshold method twice; the larger threshold is used as a coarse selection value to filter out regions that cannot be heads, and the smaller threshold serves as the head average gray threshold;
Then the variance of the image is computed to determine two growth thresholds of different size, G_m and G_n;
Region growing is first performed on the image using G_m, and regions that do not contain heads are filtered out by area and average gray value;
For each retained region, it is first determined whether it contains the centroid coordinate of any head detected in the previous frame; if so, region growing is performed on the region with each growth threshold from G_n to G_m in steps of 1, and the region most similar to that head is found by normalized matching;
For a region that does not contain the centroid of any head from the previous frame, its circularity is first compared with the circularity threshold; if it is greater, the region is directly taken as a head region and saved;
Otherwise the region is opened with a disc of radius R, where R represents the average head size; for each resulting region it is checked whether its average gray value is below the head average gray threshold and its circularity exceeds the circularity threshold, and if so the region is taken as a head and saved. Otherwise region growing is performed again with G_n, and the separated head regions are found by feature extraction;
The initial horizontal coordinate of each head centroid and its coordinates in the current frame are stored in an array;
The optical flow field of the image is computed to obtain the moving regions; each moving region is ANDed with the heads detected by the region-growing method, and if the result is non-empty the moving region is deleted, otherwise it is kept;
The heads detected by the region-growing method and those detected by the optical flow method are each matched, by nearest-neighbour matching, with the heads detected by the same algorithm in the previous frame, and the position of each head is updated;
It is determined whether each head has crossed a counting line; if so, the distance it has travelled along the counting-line direction is computed, and a count is produced if this exceeds the threshold; otherwise no action is taken.
2. The public traffic passenger flow statistics method according to claim 1, characterized in that: after the first region growing, it is determined whether each region contains the centroid of a head detected in the previous frame image; if so, the head region most similar to that of the previous frame is extracted using a series of growth thresholds and normalized shape matching.
3. The public traffic passenger flow statistics method according to claim 1, characterized in that: for special cases such as passengers wearing hats, the optical flow method is used to detect the moving human body; the optical flow method and the region-growing algorithm are used together, each contributing to the count.
4. The public traffic passenger flow statistics method according to claim 1, characterized in that: two counting lines, for boarding and alighting, are designated manually; when a tracked target crosses a counting line, it is determined whether the distance it has travelled along the counting-line direction exceeds a predetermined threshold, and a count is produced only if it does, otherwise no count is made.
CN2010101952121A 2010-06-08 2010-06-08 Public traffic passenger flow statistical method Expired - Fee Related CN101908236B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010101952121A CN101908236B (en) 2010-06-08 2010-06-08 Public traffic passenger flow statistical method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2010101952121A CN101908236B (en) 2010-06-08 2010-06-08 Public traffic passenger flow statistical method

Publications (2)

Publication Number Publication Date
CN101908236A true CN101908236A (en) 2010-12-08
CN101908236B CN101908236B (en) 2012-03-21

Family

ID=43263687

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010101952121A Expired - Fee Related CN101908236B (en) Public traffic passenger flow statistical method

Country Status (1)

Country Link
CN (1) CN101908236B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156985A (en) * 2011-04-11 2011-08-17 上海交通大学 Method for counting pedestrians and vehicles based on virtual gate
CN103426020A (en) * 2013-08-14 2013-12-04 北京凯森世纪科技发展有限公司 Exclusive infrared passenger flow counter
CN104821025A (en) * 2015-04-29 2015-08-05 广州运星科技有限公司 Passenger flow detection method and detection system thereof
CN107610282A (en) * 2017-08-21 2018-01-19 深圳市海梁科技有限公司 A kind of bus passenger flow statistical system
IT201700014889A1 (en) * 2017-02-10 2018-08-10 Iveco France Sas FORECAST SYSTEM FOR THE RESIDUAL AUTONOMY OF AN ELECTRIC VEHICLE
CN109784296A (en) * 2019-01-27 2019-05-21 武汉星巡智能科技有限公司 Bus occupant quantity statistics method, device and computer readable storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1568489A (en) * 2001-10-17 2005-01-19 拜尔丹泰梯系统公司 Face imaging system for recordal and automated identity confirmation
CN1897015A (en) * 2006-05-18 2007-01-17 王海燕 Method and system for inspecting and tracting vehicle based on machine vision

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1568489A (en) * 2001-10-17 2005-01-19 拜尔丹泰梯系统公司 Face imaging system for recordal and automated identity confirmation
CN1897015A (en) * 2006-05-18 2007-01-17 王海燕 Method and system for inspecting and tracting vehicle based on machine vision

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
《信息与电脑》, 第12期, 2009-12-31, 王秋艳, 徐燕凌: 《基于区域生长和全局运动估计的视频对象提取》, 第44,47页, relevant to claims 1-4, category 2 *
《宿州学院学报》, 第21卷, 第4期, 2006-08-31, 李凌: 《图像分割方法研究与实现》, 第85-88页, relevant to claims 1-4, category 2 *
《计算机仿真》, 第23卷, 第10期, 2006-10-31, 万缨 等: 《运动目标检测算法的探讨》, 第221-226页, relevant to claims 1-4, category 2 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156985A (en) * 2011-04-11 2011-08-17 上海交通大学 Method for counting pedestrians and vehicles based on virtual gate
CN103426020A (en) * 2013-08-14 2013-12-04 北京凯森世纪科技发展有限公司 Exclusive infrared passenger flow counter
CN104821025A (en) * 2015-04-29 2015-08-05 广州运星科技有限公司 Passenger flow detection method and detection system thereof
CN104821025B (en) * 2015-04-29 2018-01-19 广州运星科技有限公司 Passenger flow detection method and its detecting system
IT201700014889A1 (en) * 2017-02-10 2018-08-10 Iveco France Sas FORECAST SYSTEM FOR THE RESIDUAL AUTONOMY OF AN ELECTRIC VEHICLE
EP3360722A1 (en) * 2017-02-10 2018-08-15 Iveco France S.A.S. System for predicting the remaining autonomy of an electric vehicle
CN107610282A (en) * 2017-08-21 2018-01-19 深圳市海梁科技有限公司 A kind of bus passenger flow statistical system
CN109784296A (en) * 2019-01-27 2019-05-21 武汉星巡智能科技有限公司 Bus occupant quantity statistics method, device and computer readable storage medium

Also Published As

Publication number Publication date
CN101908236B (en) 2012-03-21

Similar Documents

Publication Publication Date Title
CN104200485B (en) Video-monitoring-oriented human body tracking method
CN107644429B (en) Video segmentation method based on strong target constraint video saliency
CN101908236B (en) Public traffic passenger flow statistical method
CN104751491B (en) A kind of crowd's tracking and people flow rate statistical method and device
CN103106667B (en) A kind of towards blocking the Moving Objects method for tracing with scene change
CN107403436B (en) Figure outline rapid detection and tracking method based on depth image
CN106203513B (en) A kind of statistical method based on pedestrian's head and shoulder multi-target detection and tracking
CN110517288A (en) Real-time target detecting and tracking method based on panorama multichannel 4k video image
CN106600625A (en) Image processing method and device for detecting small-sized living thing
CN104978567B (en) Vehicle checking method based on scene classification
KR101868903B1 (en) Apparatus and method for tracking human hand by using color features
CN105740945A (en) People counting method based on video analysis
CN102103748A (en) Method for detecting and tracking infrared small target in complex background
CN102622769A (en) Multi-target tracking method by taking depth as leading clue under dynamic scene
CN101916448A (en) Moving object detecting method based on Bayesian frame and LBP (Local Binary Pattern)
CN109919053A (en) A kind of deep learning vehicle parking detection method based on monitor video
CN107273905A (en) A kind of target active contour tracing method of combination movable information
CN102855466B (en) A kind of demographic method based on Computer Vision
CN112488057A (en) Single-camera multi-target tracking method utilizing human head point positioning and joint point information
CN112446882A (en) Robust visual SLAM method based on deep learning in dynamic scene
CN104063692A (en) Method and system for pedestrian positioning detection
CN106056078A (en) Crowd density estimation method based on multi-feature regression ensemble learning
CN107103301A (en) Video object space-time maximum stability identification color region matching process and system
CN110363197A (en) Based on the video area-of-interest exacting method for improving visual background extraction model
CN109949344A (en) It is a kind of to suggest that the nuclear phase of window closes filter tracking method based on color probability target

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
C53 Correction of patent for invention or patent application
CB03 Change of inventor or designer information

Inventor after: Xu Jianhua

Inventor after: Chen Xiaorong

Inventor after: Dai Shuguang

Inventor after: Mu Pingan

Inventor before: Xu Jianhua

Inventor before: Chen Xiaorong

Inventor before: Dai Shuguang

Inventor before: Mu Pingan

SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120321

Termination date: 20140608

EXPY Termination of patent right or utility model