CN100502463C - Method for collecting characteristics in traffic flow information video detection - Google Patents
Method for collecting characteristics in traffic flow information video detection
- Publication number
- CN100502463C CNB2005100620043A CN200510062004A
- Authority
- CN
- China
- Prior art keywords
- formula
- pixel
- value
- gaussian
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Traffic Control Systems (AREA)
- Image Analysis (AREA)
Abstract
The detection system comprises a video camera and a signal processor. An improved Gaussian mixture model characterizes each pixel in the image frame, using only a brightness feature. When no moving object (vehicle) is present, the video image is relatively static, and each pixel's variation over time obeys a statistical model. When a new image frame arrives, the Gaussian mixture model is updated: if a pixel of the current image matches the mixture model, it is judged a background point; otherwise, it is judged a foreground point.
Description
(1) Technical field
The present invention relates to a traffic flow information video detection method, and in particular to a feature collection method in such video detection.
(2) Background technology
Urban populations and vehicle counts are increasing sharply, traffic volume grows daily, and congestion is worsening. Traffic systems face immense pressure, and traffic problems have become a major issue in city management, hindering and restricting urban economic development; they have gradually become a common global problem. In the face of increasingly serious traffic problems, we cannot rely merely on measures such as building and rebuilding roads or adopting signal-light control to relieve the situation.
Traffic flow information collection is an important link in an intelligent transportation system. The information collected includes traffic volume, vehicle speed, vehicle classification, road occupancy, traffic density, vehicle queue length, vehicle turning, vehicle stops, and accident situations. Since 1970, experts and scholars at home and abroad have developed many traffic information collection devices, such as speed radar, inductive loop detectors, ultrasonic detectors, and microwave traffic detectors. Practical application shows these collection methods have the following shortcomings: (1) detection accuracy and reliability are not high; (2) they are unsuitable for large-scale detection; (3) the amount of traffic information obtained is small; (4) they cannot show vehicles, license plates, traffic scenes, and other information vital for traffic research, analysis, and enforcement. Restricted in detection range, capability, and reliability, traditional vehicle detectors therefore cannot satisfy the requirements of present traffic systems.
Most early video detection techniques adopted the virtual-coil method, as in AUTOSCOPE, CCATS, TAS, IMPACTS, TrafficCam, etc.; their operating principle is similar to a buried loop detector. Current video vehicle tracking identifies pixels in the traffic scene image that match vehicle characteristics, performs image segmentation, and matches vehicles between successive frames according to the extracted features, thereby computing traffic parameters. The problem with feature tracking is that, because the image is affected by the surrounding environment (such as building shadows and street lamps), a vehicle's features cannot be guaranteed to stay identical at different positions along the road. Detecting motion information from an image sequence and recognizing and tracking the moving target is the most important and critical technique. Current approaches either highlight the target or eliminate the background, and fall roughly into three methods: inter-frame difference, background difference, and optical flow.
Inter-frame difference has strong adaptability, but the choice of successive frames to difference is demanding and depends on the speed of the moving object: motion that is either too fast or too slow fails. Optical flow computation is very complex and, without hardware support, can hardly satisfy a system's real-time requirements. P. Bouthemy, D. Murray, and others have also adopted this kind of analysis to segment motion.
Although the ordinary background-difference method is fairly simple to implement, its adaptability is poor and it cannot avoid certain dynamic variations and interference. The background-elimination effect is vital to the realization of the whole system. Predecessors have proposed many background-elimination algorithms, including prediction-based methods such as Kalman filtering and Wiener filtering, but such algorithms do not consider depth information. The adaptive background-elimination algorithm based on Gaussian mixture modeling proposed by Harville et al. considers depth, color information, and temporal adaptability and improves the system's segmentation, but the algorithm's computational load is large and its real-time performance poor.
(3) Summary of the invention
To overcome the shortcomings of prior-art traffic flow video detection methods, namely feature tracking that is easily affected by the environment, poor real-time performance, and slow processing speed, the invention provides a feature collection method for traffic flow information video detection that adapts to environmental change, runs in real time, and processes quickly.
The technical solution adopted for the present invention to solve the technical problems is:
A feature collection method in traffic flow information video detection: the detection system comprises a video camera and a signal processor. The camera's input video image sequence is {1, 2, ..., t, ...}; in frame t, the value of pixel i is x_{i,t} = [R_{i,t}, G_{i,t}, B_{i,t}], and the probability density function of the k-th Gaussian distribution is formula (1):

η(x_{i,t} | μ_k, Σ_k) = (2π)^(-n/2) |Σ_k|^(-1/2) exp(-(1/2)(x_{i,t} - μ_k)^T Σ_k^(-1) (x_{i,t} - μ_k))   (1)

where k denotes the k-th Gaussian distribution of pixel i at time t, μ_k is the mean vector, and Σ_k is the covariance matrix.
The current feature of pixel i is related to the pixel's past features; its probability is given by formula (2):

P(x_{i,t}) = Σ_{k=1}^{K} ω_{i,t-1,k} · η(x_{i,t} | μ_{i,t-1,k}, Σ_{i,t-1,k})   (2)

where t is a time point, μ_{i,t-1,k} and Σ_{i,t-1,k} are respectively the mean vector and covariance matrix of the k-th Gaussian model of pixel i at time t-1, and ω_{i,t-1,k} is the weight corresponding to that Gaussian distribution;
The method comprises the following steps:
(1) Acquire the camera's video image, obtain the R, G, B color-space image sequence, and denoise the image with median filtering;
(2) Convert the color space of the processed image sequence from R, G, B to a brightness space, S (brightness) = (R + G + B) / 3;
(3) Set the parameters of the Gaussian mixture algorithm, comprising the global background threshold T, the learning rate α, the number of Gaussian models K, and the initial weight ω;
(4) Read the brightness space of the image; take the brightness value of each pixel of the first frame as the mixture mean and a predetermined empirical value as the variance, establishing a single-Gaussian background model;
(5) Read frame t and compare each pixel with that pixel's existing k (k <= K) Gaussian models, checking whether formula (3) holds:

|x_{i,t} - μ_{i,t,k}| < 2.5 σ²_{i,t,k}   (3)

where σ²_{i,t,k} is the brightness variance of the k-th Gaussian model of pixel i at time t;
(5.1) If matched, update the parameters and weight of the k-th mixture Gaussian model; the parameters comprise the expectation and variance, per formulas (4), (5), (6):

μ_{i,t,k} = (1 - ρ) μ_{i,t-1,k} + ρ x_{i,t}   (4)

σ²_{i,t,k} = (1 - ρ) σ²_{i,t-1,k} + ρ (x_{i,t} - μ_{i,t,k})^T (x_{i,t} - μ_{i,t,k})   (5)

ρ = α η_k(x_{i,t} | μ_{i,t,k}, σ_k)   (6)

where μ_{i,t,k} and σ²_{i,t,k} are respectively the mean vector and brightness variance of the k-th Gaussian model of pixel i at time t, α is the learning rate, and ρ is the model's adaptive learning factor, similar in effect to α;
(5.2) If not matched and k < K, add a Gaussian model for frame t; the new Gaussian takes the value of x_{i,t} as its mean, with empirical values for the variance and weight ω;
(5.3) If not matched and k = K, replace the lowest-weight Gaussian among the K models with a new Gaussian; the new Gaussian takes the value of x_{i,t} as its mean, with empirical values for the variance and weight ω;
(5.4) The update formula for the weight ω is formula (7):

ω_{k,t} = (1 - α) ω_{k,t-1} + α M_{k,t}   (7)

where ω_{k,t} is the current weight, α is the learning rate, ω_{k,t-1} is the corresponding weight of the previous frame, and M_{k,t} is the match indicator: M_{k,t} = 1 if matched, M_{k,t} = 0 otherwise;
(6) Take the absolute difference between the established background model and the current image, extract the features of the moving target, and, through processing, obtain the vehicle contour and tracking parameters.
Further, in step (6), extracting the features of the moving target comprises:
(6.1) Computing the area of the moving region: with h the side length of a square pixel, the area S of region A is given by formula (8):

S = Σ_{(x,y)∈A} h²   (8)

where the sum runs over all points (x, y) belonging to region A;
(6.2) Computing the region center: the centroid is calculated from all points of region A per formulas (9), (10):

x̄ = (1/N) Σ_{(x,y)∈A} x   (9)
ȳ = (1/N) Σ_{(x,y)∈A} y   (10)

where N is the number of points in A;
(6.3) Computing the length and width of the moving target using the target's minimum enclosing rectangle (MER): the object boundary is rotated in increments of a predetermined angle; after each increment, a horizontally placed rectangle MER is fitted to the boundary, and the minimum and maximum X and Y values of the rotated boundary points are recorded; when the MER's area reaches its minimum, the MER's dimensions give the length and width of the target.
Further, in step (6), extracting the features of the moving target also comprises: (6.4) invariant moments: for a digital image function f(x, y), the moments are computed over all points belonging to the region; if f is piecewise continuous and nonzero only at finitely many points of the XY plane, its moments of every order exist.
Further, in step (6), before the features of the moving target are extracted, the moving image is first binarized and then dilated and eroded. Dilation fills small holes in the target region and merges into the object all background points touching it; erosion removes isolated noise foreground points and strips the object's boundary points. The moving vehicle's contour is then obtained and stored in the contour attribute of a user-defined structure.
In step (3), the global background threshold is T = 0.7; the learning rate α typically lies in [0.001, 0.01]; K typically lies in [3, 5]; and the initial weight is ω = 0.05.
The operating principle of the invention is as follows. Only the luminance component is used in the covariance matrix: noise interferes strongly with chrominance information but only slightly with luminance, so sacrificing chrominance greatly improves the real-time performance of the whole traffic flow detection system while barely affecting target extraction. Differencing the current image frame against the background model yields an accurate moving-vehicle target; binarization, erosion, and dilation then produce the moving vehicle's contour, from which the region area, region centroid, minimum enclosing rectangle (MER) of the vehicle contour, and invariant moments are extracted. Thresholding the region area distinguishes whether the moving object is a person, a vehicle, or some other disturbance, and the extracted centroid, MER, and invariant moments enable real-time, effective detection and tracking of moving vehicles.
The beneficial effects of the invention are mainly: 1. it adapts to environmental change and can cope with rainy or foggy weather and slow changes in lighting; 2. the algorithm is fast and highly real-time, processing 16 to 17 frames per second; 3. it is simple to operate; 4. through video detection it detects road traffic flow information in real time, monitors road traffic conditions, and records traffic flow data and road condition information.
(4) Description of drawings
Fig. 1 is the system flow diagram of the feature collection method in traffic flow information video detection.
Fig. 2 is the flow chart of feature collection based on the improved Gaussian mixture background model.
(5) Embodiment
Below in conjunction with accompanying drawing the present invention is further described.
With reference to Fig. 1 and Fig. 2: a feature collection method in traffic flow information video detection. The detection system comprises a video camera and a signal processor; all pixel processing described below is under a fixed camera. The input video image sequence is {1, 2, ..., t, ...}; in frame t, the value of pixel i is x_{i,t} = [R_{i,t}, G_{i,t}, B_{i,t}], and the probability density function of the k-th Gaussian distribution is formula (1):

η(x_{i,t} | μ_k, Σ_k) = (2π)^(-n/2) |Σ_k|^(-1/2) exp(-(1/2)(x_{i,t} - μ_k)^T Σ_k^(-1) (x_{i,t} - μ_k))   (1)

where μ_k is the mean vector and Σ_k the covariance matrix.
The current feature of pixel i is related to the pixel's past features; its probability is given by formula (2):

P(x_{i,t}) = Σ_{k=1}^{K} ω_{i,t-1,k} · η(x_{i,t} | μ_{i,t-1,k}, Σ_{i,t-1,k})   (2)

where t is a time point, μ_{i,t-1,k} and Σ_{i,t-1,k} are respectively the mean vector and covariance matrix of the k-th Gaussian model of pixel i at time t-1, and ω_{i,t-1,k} is the weight corresponding to that Gaussian distribution.
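As an illustration, formulas (1) and (2) can be sketched in Python for the single brightness channel the method actually uses (the function names are ours, not the patent's):

```python
import math

def gaussian_pdf(x, mu, var):
    """Formula (1) specialized to one brightness channel:
    eta(x | mu, sigma^2) for scalar x, mean mu, variance var."""
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def mixture_prob(x, weights, means, variances):
    """Formula (2): probability of brightness x under the K-component
    mixture carried over from frame t-1."""
    return sum(w * gaussian_pdf(x, m, v)
               for w, m, v in zip(weights, means, variances))
```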
The feature collection method comprises the following steps:
(1) Obtain the R, G, B color-space image sequence from a high-resolution CCD camera; since it naturally contains noise, denoise the image with median filtering. Median filtering is a common way to remove random image noise. In traffic flow images, low-pass filtering would blur the boundary while removing noise, because edge contours contain a great deal of high-frequency information; high-pass filtering would reinforce the noise as well. The spatial median filter therefore suppresses the image noise while keeping contours clear.
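The median filtering of step (1) can be sketched as a minimal pure-Python 3x3 filter on a flat, row-major grayscale image (a real system would use an optimized library routine; border pixels are left unchanged here for simplicity):

```python
def median_filter(img, w, h):
    """3x3 median filter: replace each interior pixel with the median of
    its 3x3 neighbourhood, suppressing impulse noise without blurring
    edges the way a low-pass filter would."""
    out = list(img)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = sorted(img[(y + dy) * w + (x + dx)]
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y * w + x] = win[4]   # median of the 9 values
    return out
```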
(2) Convert the color space of the processed image sequence from R, G, B to a brightness space, S (brightness) = (R + G + B) / 3.
(3) Set the parameters of the Gaussian mixture algorithm. The global background threshold T (which determines the number of background distributions) is generally T = 0.7. The learning rate α typically lies in [0.001, 0.01]; here α = 0.005. K typically lies in [3, 5]; a larger K lets the system characterize more complex scenes but increases the computation correspondingly, and this algorithm takes K = 3. The initial weight ω is generally a small value; here ω = 0.05.
(4) Read the brightness space of the image; take the brightness value of each pixel of the first frame as the mixture mean, and give the variance a larger empirical value, here σ = 20. This establishes the single-Gaussian background model (initialization of the mixture model).
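The initialization of step (4) might look like the following sketch (the per-pixel model is a list of components, each a dict; sigma = 20 and omega = 0.05 follow the empirical values given above, and the data layout is our choice, not the patent's):

```python
def init_models(first_frame, width, height, sigma0=20.0, w0=0.05):
    """Step (4): one Gaussian per pixel; mean = first-frame brightness,
    variance = sigma0^2 (a deliberately large empirical value),
    weight = the small initial omega."""
    return [[{"mu": float(first_frame[i]), "var": sigma0 ** 2, "w": w0}]
            for i in range(width * height)]
```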
(5) Read frame t and compare each pixel with that pixel's existing k (k <= K) Gaussian models, checking whether formula (3) holds:

|x_{i,t} - μ_{i,t,k}| < 2.5 σ²_{i,t,k}   (3)

where σ²_{i,t,k} is the brightness variance of the k-th Gaussian model of pixel i at time t.
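The matching test of formula (3) is commonly implemented against 2.5 standard deviations (the Stauffer-Grimson criterion); a sketch, reading the patent's sigma-squared as the variance whose square root is compared:

```python
import math

def matches(x, mu, var):
    """Formula (3) as the usual 2.5-standard-deviation test: the new
    brightness x is attributed to a component when it lies within
    2.5 sigma of that component's mean."""
    return abs(x - mu) < 2.5 * math.sqrt(var)
```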
(5.1) If matched, update the parameters and weight of the k-th mixture Gaussian model.
The parameters comprise the expectation and variance, per formulas (4), (5), (6):

μ_{i,t,k} = (1 - ρ) μ_{i,t-1,k} + ρ x_{i,t}   (4)

σ²_{i,t,k} = (1 - ρ) σ²_{i,t-1,k} + ρ (x_{i,t} - μ_{i,t,k})^T (x_{i,t} - μ_{i,t,k})   (5)

ρ = α η_k(x_{i,t} | μ_{i,t,k}, σ_k)   (6)

where μ_{i,t,k} and σ²_{i,t,k} are respectively the mean vector and brightness variance of the k-th Gaussian model of pixel i at time t, α is the learning rate, and ρ is the model's adaptive learning factor, similar in effect to α.
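Formulas (4) to (6) for the matched component can be sketched as follows (scalar brightness case; `alpha` is the learning rate from step (3)):

```python
import math

def update_matched(x, mu, var, alpha):
    """Update the matched component per formulas (4)-(6):
    rho = alpha * eta(x | mu, sigma), then exponentially weighted
    updates of the mean and variance."""
    eta = math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)
    rho = alpha * eta                                        # formula (6)
    mu_new = (1.0 - rho) * mu + rho * x                      # formula (4)
    var_new = (1.0 - rho) * var + rho * (x - mu_new) ** 2    # formula (5)
    return mu_new, var_new
```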
(5.2) If not matched and k < K, add a Gaussian model; the new Gaussian takes the value of x_{i,t} as its mean, with a larger empirical variance, here σ = 20, and a small weight ω = 0.05.
(5.3) If not matched and k = K, replace the lowest-weight Gaussian with a new Gaussian, whose mean and variance are assigned as in (5.2).
(5.4) The update formula for the weight ω is formula (7):

ω_{k,t} = (1 - α) ω_{k,t-1} + α M_{k,t}   (7)

where ω_{k,t} is the current weight, α is the learning rate, ω_{k,t-1} is the corresponding weight of the previous frame, and M_{k,t} is the match indicator: M_{k,t} = 1 if matched, M_{k,t} = 0 otherwise. As for the learning rate α: a larger α adapts more strongly to environmental change, so a quickly changing background is merged into the background model sooner, but the model is more easily disturbed by noise; a smaller α adapts less, and a temporarily stationary object may be absorbed into the background model as changed background.
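Formula (7) with the match indicator M can be sketched as follows; the final renormalization is standard practice for keeping the weights a probability distribution, though the patent does not state it explicitly (it is our addition):

```python
def update_weights(weights, matched_index, alpha):
    """Formula (7): omega_{k,t} = (1-alpha)*omega_{k,t-1} + alpha*M_{k,t},
    where M_{k,t} is 1 for the matched component and 0 otherwise,
    followed by renormalization (our assumption)."""
    new = [(1.0 - alpha) * w + (alpha if k == matched_index else 0.0)
           for k, w in enumerate(weights)]
    total = sum(new)
    return [w / total for w in new]
```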
Read a new frame and repeat steps (5.1) to (5.4) to build up the Gaussian distributions of the background model.
(6) Take the absolute difference between the established background model and the current image, extract the features of the moving target, and, through processing, obtain the vehicle contour and tracking parameters.
First binarize the foreground target and apply dilation and erosion. Dilation fills small holes in the target region, merging into the object all background points touching it; erosion removes isolated noise foreground points and strips the object's boundary points. These two morphological operations yield a more accurate moving-vehicle contour, which is stored in the contour attribute of a user-defined structure.
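The dilation and erosion described above can be sketched on a flat binary mask with a 3x3 structuring element (out-of-bounds neighbours count as background for erosion, an assumption of this sketch):

```python
def _neighbours(mask, w, h, x, y):
    """In-bounds values of the 3x3 neighbourhood of (x, y)."""
    return [mask[(y + dy) * w + (x + dx)]
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if 0 <= y + dy < h and 0 <= x + dx < w]

def dilate(mask, w, h):
    """Dilation: any background pixel touching a foreground pixel is
    merged into the foreground, filling small holes in the target."""
    return [1 if any(_neighbours(mask, w, h, x, y)) else 0
            for y in range(h) for x in range(w)]

def erode(mask, w, h):
    """Erosion: a foreground pixel survives only if its entire 3x3
    neighbourhood is foreground, removing isolated noise points and
    peeling object boundary points."""
    out = []
    for y in range(h):
        for x in range(w):
            nb = _neighbours(mask, w, h, x, y)
            out.append(1 if mask[y * w + x] and len(nb) == 9 and all(nb)
                       else 0)
    return out
```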
Having extracted the target of interest, its features must still be extracted and described. Current target features divide into gray-level features, texture features, and geometric features. The features adopted during tracking in this system are introduced below:
1) Region area
Region area is a basic feature of a region, describing its size. With h the side length of a square pixel, the area S of region A is given by formula (8):

S = Σ_{(x,y)∈A} h²   (8)

where the sum runs over all points (x, y) belonging to region A. By computing the area of the moving region one can judge whether the moving target is a non-vehicle disturbance, judge the width of the moving vehicle, and detect overlapping vehicles. When the moving vehicle's area falls below a threshold, the vehicle is judged to have left the effective region.
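Formula (8) reduces to counting the region's pixels and scaling by h squared; a sketch:

```python
def region_area(points, h=1.0):
    """Formula (8): area of region A as (number of pixels) * h^2,
    where h is the pixel side length; used to reject blobs below a
    size threshold as non-vehicle disturbances."""
    return len(points) * h * h
```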
2) Region centroid
The region centroid is a global descriptor whose coordinates are computed from all points belonging to the region; although the coordinates of the region's points are always integers, the centroid's coordinates are usually not. When the region itself is small relative to the distances between regions, the region can be approximately represented by a particle at its centroid.
The centroid over all points of region A is computed per formulas (9), (10):

x̄ = (1/N) Σ_{(x,y)∈A} x   (9)
ȳ = (1/N) Σ_{(x,y)∈A} y   (10)

where N is the number of points in A.
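Formulas (9) and (10) as a sketch:

```python
def region_centroid(points):
    """Formulas (9)-(10): centroid of region A as the mean of the x and
    y coordinates of all its pixels; the result is generally not an
    integer even though pixel coordinates are."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    return cx, cy
```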
3) Length and width
After an object is extracted from an image, its horizontal and vertical extents are easy to compute: the minimum and maximum row and column numbers of the object suffice. But for an object moving in an arbitrary direction, the horizontal and vertical directions usually differ from the directions of interest; in that case the object's minimum enclosing rectangle (MER) can be used.
With the MER technique, the object boundary is rotated up to 90° in increments of about 3°; after each increment a horizontally placed MER is fitted to the boundary, for which only the minimum and maximum X and Y values of the rotated boundary points need be recorded. At some rotation angle the MER's area reaches a minimum; the MER's dimensions at that angle represent the object's length and width, and the angle itself gives the object's principal-axis direction. This technique suits roughly rectangular objects particularly well and gives satisfactory results for vehicle detection. When vehicles overlap or move too fast, matching the target's mathematical features during moving-vehicle contour tracking can effectively improve tracking precision.
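The MER search can be sketched as follows, rotating the boundary points (rather than the rectangle) in 3-degree steps up to 90 degrees as described; the step size is a parameter of this sketch:

```python
import math

def min_bounding_rect(points, step_deg=3.0, max_deg=90.0):
    """MER search: rotate the boundary points in small angular
    increments, fit an axis-aligned rectangle from the min/max X, Y of
    the rotated points, and keep the angle whose rectangle area is
    smallest. Returns (width, height, angle_deg)."""
    best = None  # (area, width, height, angle)
    angle = 0.0
    while angle <= max_deg:
        r = math.radians(angle)
        c, s = math.cos(r), math.sin(r)
        xs = [x * c - y * s for x, y in points]
        ys = [x * s + y * c for x, y in points]
        wdt, hgt = max(xs) - min(xs), max(ys) - min(ys)
        if best is None or wdt * hgt < best[0]:
            best = (wdt * hgt, wdt, hgt, angle)
        angle += step_deg
    _, wdt, hgt, ang = best
    return wdt, hgt, ang
```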
4) Invariant moments
The region's moments in the image plane can also be treated as features. For a digital image function f(x, y) that is piecewise continuous and nonzero only at finitely many points of the XY plane, its moments of every order can be shown to exist. A region's moments are computed from all points belonging to the region and are therefore relatively insensitive to noise.
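Raw and central moments over a pixel region can be sketched as follows; central moments are translation-invariant and are the building blocks of the invariant moments mentioned above:

```python
def raw_moment(points, p, q):
    """Raw moment m_pq = sum over the region of x^p * y^q; m00 gives the
    pixel count and m10/m00, m01/m00 give the centroid."""
    return sum((x ** p) * (y ** q) for x, y in points)

def central_moment(points, p, q):
    """Central moment mu_pq, taken about the centroid so that it is
    invariant to translation of the region."""
    m00 = raw_moment(points, 0, 0)
    cx = raw_moment(points, 1, 0) / m00
    cy = raw_moment(points, 0, 1) / m00
    return sum(((x - cx) ** p) * ((y - cy) ** q) for x, y in points)
```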
(7) Extract the object's contour, centroid, area, minimum enclosing rectangle (MER), and invariant moments; these several key features are stored in the user-defined structure.
(8) If there are new moving vehicles, repeat steps (5) to (7).
An improved Gaussian mixture model characterizes the feature of each pixel in the image frame, using only brightness. When no moving target (vehicle) is present, the video image is relatively static and each pixel's variation over time obeys a statistical model; in this algorithm each pixel is characterized by a mixture of K Gaussian distributions. When a new image frame arrives, the mixture model is updated: a pixel of the current image that matches the mixture model is judged a background point, otherwise a foreground point. Taking the absolute difference between the established background model and the current image prevents pixel overflow and the white spots it would produce, and controls noise better; after processing, a more accurate vehicle contour and the parameters needed for tracking are obtained, achieving good results in vehicle counting and speed detection.
Claims (5)
1. A feature collection method in traffic flow information video detection, the detection system comprising a video camera and a signal processor, the camera's input video image sequence being {1, 2, ..., t, ...}; in frame t, the value of pixel i is x_{i,t} = [R_{i,t}, G_{i,t}, B_{i,t}], and the probability density function of the k-th Gaussian distribution is formula (1):

η(x_{i,t} | μ_k, Σ_k) = (2π)^(-n/2) |Σ_k|^(-1/2) exp(-(1/2)(x_{i,t} - μ_k)^T Σ_k^(-1) (x_{i,t} - μ_k))   (1)

where k denotes the k-th Gaussian distribution of pixel i at time t, μ_k is the mean vector, and Σ_k is the covariance matrix; the current feature of pixel i is related to the pixel's past features, its probability being formula (2):

P(x_{i,t}) = Σ_{k=1}^{K} ω_{i,t-1,k} · η(x_{i,t} | μ_{i,t-1,k}, Σ_{i,t-1,k})   (2)

where t is a time point, μ_{i,t-1,k} and Σ_{i,t-1,k} are respectively the mean vector and covariance matrix of the k-th Gaussian model of pixel i at time t-1, and ω_{i,t-1,k} is the weight corresponding to that Gaussian distribution;
the method comprising the steps of:
(1) acquiring the camera's video image, obtaining the R, G, B color-space image sequence, and denoising the image with median filtering;
(2) converting the color space of the processed image sequence from R, G, B to a brightness space, S (brightness) = (R + G + B) / 3;
(3) setting the parameters of the Gaussian mixture algorithm, comprising the global background threshold T, the learning rate α, the number of Gaussian models K, and the initial weight ω;
(4) reading the brightness space of the image; taking the brightness value of each pixel of the first frame as the mixture mean and a predetermined empirical value as the variance, establishing a single-Gaussian background model;
(5) reading frame t and comparing each pixel with that pixel's existing k (k <= K) Gaussian models, checking whether formula (3) holds:

|x_{i,t} - μ_{i,t,k}| < 2.5 σ²_{i,t,k}   (3)

where σ²_{i,t,k} is the brightness variance of the k-th Gaussian model of pixel i at time t;
(5.1) if matched, updating the parameters and weight of the k-th mixture Gaussian model, the parameters comprising the expectation and variance, per formulas (4), (5), (6):

μ_{i,t,k} = (1 - ρ) μ_{i,t-1,k} + ρ x_{i,t}   (4)

σ²_{i,t,k} = (1 - ρ) σ²_{i,t-1,k} + ρ (x_{i,t} - μ_{i,t,k})^T (x_{i,t} - μ_{i,t,k})   (5)

ρ = α η_k(x_{i,t} | μ_{i,t,k}, σ_k)   (6)

where μ_{i,t,k} and σ²_{i,t,k} are respectively the mean vector and brightness variance of the k-th Gaussian model of pixel i at time t, α is the learning rate, and ρ is the model's adaptive learning factor, similar in effect to α;
(5.2) if not matched and k < K, adding a Gaussian model for frame t, the new Gaussian taking the value of x_{i,t} as its mean, with empirical values for the variance and weight ω;
(5.3) if not matched and k = K, replacing the lowest-weight Gaussian among the K models with a new Gaussian, the new Gaussian taking the value of x_{i,t} as its mean, with empirical values for the variance and weight ω;
(5.4) the update formula for the weight ω being formula (7):

ω_{k,t} = (1 - α) ω_{k,t-1} + α M_{k,t}   (7)

where ω_{k,t} is the current weight, α is the learning rate, ω_{k,t-1} is the corresponding weight of the previous frame, and M_{k,t} is the match indicator: M_{k,t} = 1 if matched, M_{k,t} = 0 otherwise;
(6) taking the absolute difference between the established background model and the current image, extracting the features of the moving target, and, through processing, obtaining the vehicle contour and tracking parameters.
2. The feature collection method in traffic flow information video detection of claim 1, characterized in that, in step (6), extracting the features of the moving target comprises:
(6.1) computing the area of the moving region: with h the side length of a square pixel, the area S of region A is given by formula (8):

S = Σ_{(x,y)∈A} h²   (8)

where the sum runs over all points (x, y) belonging to region A;
(6.2) computing the region center: the centroid is calculated from all points of region A per formulas (9), (10):

x̄ = (1/N) Σ_{(x,y)∈A} x   (9)
ȳ = (1/N) Σ_{(x,y)∈A} y   (10)

where N is the number of points in A;
(6.3) computing the length and width of the moving target using the target's minimum enclosing rectangle (MER): the object boundary is rotated in increments of a predetermined angle; after each increment a horizontally placed rectangle MER is fitted to the boundary, and the minimum and maximum X and Y values of the rotated boundary points are recorded; when the MER's area reaches its minimum, the MER's dimensions give the length and width of the target.
3. The feature collection method in traffic flow information video detection of claim 2, characterized in that, in step (6), extracting the features of the moving target also comprises:
(6.4) invariant moments: for the digital image function f(x, y), the moments are computed over all points belonging to the region; if f is piecewise continuous and nonzero only at finitely many points of the XY plane, its moments of every order are judged to exist.
4. The feature collection method in traffic flow information video detection of any of claims 1 to 3, characterized in that, in step (6), before the features of the moving target are extracted, the moving image is first binarized and then dilated and eroded, dilation filling small holes in the target region and merging into the object all background points touching it, and erosion removing isolated noise foreground points and stripping the object's boundary points; the moving vehicle's contour is obtained and stored in the contour attribute of a user-defined structure.
5. The feature collection method in traffic flow information video detection of claim 4, characterized in that, in step (3), the global background threshold is T = 0.7, the learning rate α typically lies in [0.001, 0.01], K typically lies in [3, 5], and the initial weight is ω = 0.05.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNB2005100620043A CN100502463C (en) | 2005-12-14 | 2005-12-14 | Method for collecting characteristics in telecommunication flow information video detection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN1984236A CN1984236A (en) | 2007-06-20 |
CN100502463C true CN100502463C (en) | 2009-06-17 |
Family
ID=38166433
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB2005100620043A Active CN100502463C (en) | 2005-12-14 | 2005-12-14 | Method for collecting characteristics in telecommunication flow information video detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN100502463C (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102955940A (en) * | 2012-11-28 | 2013-03-06 | 山东电力集团公司济宁供电公司 | System and method for detecting power transmission line object |
Families Citing this family (58)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7929729B2 (en) * | 2007-04-02 | 2011-04-19 | Industrial Technology Research Institute | Image processing methods |
CN101431665B (en) * | 2007-11-08 | 2010-09-15 | 财团法人工业技术研究院 | Method and system for detecting and tracing object |
CN101437113B (en) * | 2007-11-14 | 2010-07-28 | 汉王科技股份有限公司 | Apparatus and method for detecting self-adapting inner core density estimation movement |
CN101448151B (en) * | 2007-11-28 | 2011-08-17 | 汉王科技股份有限公司 | Motion detecting device for estimating self-adapting inner core density and method therefor |
CN101527838B (en) * | 2008-03-04 | 2010-12-08 | 华为技术有限公司 | Method and system for feedback-type object detection and tracing of video object |
CN101540103B (en) * | 2008-03-17 | 2013-06-19 | 上海宝康电子控制工程有限公司 | Method and system for traffic information acquisition and event processing |
CN101303732B (en) * | 2008-04-11 | 2011-06-22 | 西安交通大学 | Method for apperceiving and alarming movable target based on vehicle-mounted monocular camera |
CN101635026B (en) * | 2008-07-23 | 2012-05-23 | 中国科学院自动化研究所 | Method for detecting derelict without tracking process |
CN101872279B (en) * | 2009-04-23 | 2012-11-21 | 深圳富泰宏精密工业有限公司 | Electronic device and method for adjusting position of display image thereof |
CN101909145B (en) * | 2009-06-05 | 2012-03-28 | 鸿富锦精密工业(深圳)有限公司 | Image noise filtering system and method |
CN101639983B (en) * | 2009-08-21 | 2011-02-02 | 任雪梅 | Multilane traffic volume detection method based on image information entropy |
CN101799968B (en) * | 2010-01-13 | 2013-06-05 | 李秋华 | Detection method and device for oil well intrusion based on video image intelligent analysis |
CN102236968B (en) * | 2010-05-05 | 2015-08-19 | 刘嘉 | Intelligent remote monitoring system for transport vehicle |
CN101883209B (en) * | 2010-05-31 | 2012-09-12 | 中山大学 | Method for integrating background model and three-frame difference to detect video background |
CN101882311A (en) * | 2010-06-08 | 2010-11-10 | 中国科学院自动化研究所 | Background modeling acceleration method based on CUDA (Compute Unified Device Architecture) technology |
CN101916447B (en) * | 2010-07-29 | 2012-08-15 | 江苏大学 | Robust motion target detecting and tracking image processing system |
CN102385705B (en) * | 2010-09-02 | 2013-09-18 | 大猩猩科技股份有限公司 | Abnormal behavior detection system and method by utilizing automatic multi-feature clustering method |
CN101964113A (en) * | 2010-10-02 | 2011-02-02 | 上海交通大学 | Method for detecting moving target in illuminance abrupt variation scene |
CN101980300B (en) * | 2010-10-29 | 2012-07-04 | 杭州电子科技大学 | 3G smart phone-based motion detection method |
CN102043950B (en) * | 2010-12-30 | 2012-11-28 | 南京信息工程大学 | Vehicle outline recognition method based on canny operator and marginal point statistic |
CN102081802A (en) * | 2011-01-26 | 2011-06-01 | 北京中星微电子有限公司 | Method and device for detecting color card based on block matching |
CN102521580A (en) * | 2011-12-21 | 2012-06-27 | 华平信息技术(南昌)有限公司 | Real-time target matching tracking method and system |
US9070019B2 (en) | 2012-01-17 | 2015-06-30 | Leap Motion, Inc. | Systems and methods for capturing motion in three-dimensional space |
US9679215B2 (en) | 2012-01-17 | 2017-06-13 | Leap Motion, Inc. | Systems and methods for machine control |
US8638989B2 (en) | 2012-01-17 | 2014-01-28 | Leap Motion, Inc. | Systems and methods for capturing motion in three-dimensional space |
US9501152B2 (en) | 2013-01-15 | 2016-11-22 | Leap Motion, Inc. | Free-space user interface and control using virtual constructs |
US11493998B2 (en) | 2012-01-17 | 2022-11-08 | Ultrahaptics IP Two Limited | Systems and methods for machine control |
US8693731B2 (en) | 2012-01-17 | 2014-04-08 | Leap Motion, Inc. | Enhanced contrast for object detection and characterization by optical imaging |
US10691219B2 (en) | 2012-01-17 | 2020-06-23 | Ultrahaptics IP Two Limited | Systems and methods for machine control |
RU2506640C2 (en) * | 2012-03-12 | 2014-02-10 | Государственное казенное образовательное учреждение высшего профессионального образования Академия Федеральной службы охраны Российской Федерации (Академия ФСО России) | Method of identifying insert frames in multimedia data stream |
CN102693637B (en) * | 2012-06-12 | 2014-09-03 | 北京联合大学 | Signal lamp for prompting right-turn vehicle to avoid pedestrians at crossroad |
CN102799857B (en) * | 2012-06-19 | 2014-12-17 | 东南大学 | Video multi-vehicle outline detection method |
CN102867193B (en) * | 2012-09-14 | 2015-06-17 | 成都国科海博信息技术股份有限公司 | Biological detection method and device and biological detector |
US9285893B2 (en) | 2012-11-08 | 2016-03-15 | Leap Motion, Inc. | Object detection and tracking with variable-field illumination devices |
US10609285B2 (en) | 2013-01-07 | 2020-03-31 | Ultrahaptics IP Two Limited | Power consumption in motion-capture systems |
US9465461B2 (en) | 2013-01-08 | 2016-10-11 | Leap Motion, Inc. | Object detection and tracking with audio and optical signals |
US9459697B2 (en) | 2013-01-15 | 2016-10-04 | Leap Motion, Inc. | Dynamic, free-space user interactions for machine control |
CN103150738A (en) * | 2013-02-02 | 2013-06-12 | 南京理工大学 | Detection method of moving objects of distributed multisensor |
US9702977B2 (en) | 2013-03-15 | 2017-07-11 | Leap Motion, Inc. | Determining positional information of an object in space |
US9916009B2 (en) | 2013-04-26 | 2018-03-13 | Leap Motion, Inc. | Non-tactile interface systems and methods |
CN103272783B (en) * | 2013-06-21 | 2015-11-04 | 核工业理化工程研究院华核新技术开发公司 | Colored CCD color selector color judges and separation method |
US9721383B1 (en) | 2013-08-29 | 2017-08-01 | Leap Motion, Inc. | Predictive information for free space gesture control and communication |
US9632572B2 (en) | 2013-10-03 | 2017-04-25 | Leap Motion, Inc. | Enhanced field of view to augment three-dimensional (3D) sensory space for free-space gesture interpretation |
US9996638B1 (en) | 2013-10-31 | 2018-06-12 | Leap Motion, Inc. | Predictive information for free space gesture control and communication |
CN103646544B (en) * | 2013-11-15 | 2016-03-09 | 天津天地伟业数码科技有限公司 | Based on the vehicle behavioural analysis recognition methods of The Cloud Terrace and camera apparatus |
CN103578121B (en) * | 2013-11-22 | 2016-08-17 | 南京信大气象装备有限公司 | Method for testing motion based on shared Gauss model under disturbed motion environment |
US9613262B2 (en) | 2014-01-15 | 2017-04-04 | Leap Motion, Inc. | Object detection and tracking for providing a virtual device experience |
CN104036288A (en) * | 2014-05-30 | 2014-09-10 | 宁波海视智能系统有限公司 | Vehicle type classification method based on videos |
CN204480228U (en) | 2014-08-08 | 2015-07-15 | 厉动公司 | motion sensing and imaging device |
CN105472204B (en) * | 2014-09-05 | 2018-12-14 | 南京理工大学 | Noise reducing method based on motion detection |
CN104267209B (en) * | 2014-10-24 | 2017-01-11 | 浙江力石科技股份有限公司 | Method and system for expressway video speed measurement based on virtual coils |
CN104950285B (en) * | 2015-06-02 | 2017-08-25 | 西安理工大学 | A kind of RFID indoor orientation methods changed based on neighbour's label signal difference |
CN106412501B (en) * | 2016-09-20 | 2019-07-23 | 华中科技大学 | A kind of the construction safety behavior intelligent monitor system and its monitoring method of video |
CN109146914B (en) * | 2018-06-20 | 2023-05-30 | 上海市政工程设计研究总院(集团)有限公司 | Drunk driving behavior early warning method for expressway based on video analysis |
CN109035205A (en) * | 2018-06-27 | 2018-12-18 | 清华大学苏州汽车研究院(吴江) | Water hyacinth contamination detection method based on video analysis |
CN108694833A (en) * | 2018-07-17 | 2018-10-23 | 重庆交通大学 | Traffic abnormal incident detecting system based on binary sensor |
US11452911B2 (en) * | 2019-02-22 | 2022-09-27 | Trackman A/S | System and method for driving range shot travel path characteristics |
CN113286194A (en) * | 2020-02-20 | 2021-08-20 | 北京三星通信技术研究有限公司 | Video processing method and device, electronic equipment and readable storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1302427A (en) * | 1997-11-03 | 2001-07-04 | T-NETIX Inc. | Model adaptation system and method for speaker verification |
EP1241661A1 (en) * | 2001-03-13 | 2002-09-18 | Nec Corporation | Speech recognition apparatus |
CN1700238A (en) * | 2005-06-23 | 2005-11-23 | Fudan University | Method for dividing human body skin area from color digital images and video graphs |
Non-Patent Citations (2)
Title |
---|
An improved adaptive background-subtraction algorithm based on a Gaussian mixture model. Wang Liangsheng, Cheng Yinhang. Journal of Northern Jiaotong University, Vol. 27, No. 6. 2003 *
Real-time detection of moving targets in video images. Zhang Xudong, Qian Wei, Gao Jun, Fang Tingjian. Systems Engineering and Electronics, Vol. 27, No. 3. 2005 *
Also Published As
Publication number | Publication date |
---|---|
CN1984236A (en) | 2007-06-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN100502463C (en) | Method for collecting characteristics in telecommunication flow information video detection | |
CN110178167B (en) | Intersection violation video identification method based on cooperative relay of cameras | |
CN108983219B (en) | Fusion method and system for image information and radar information of traffic scene | |
CN102768804B (en) | Video-based traffic information acquisition method | |
CN102542289B (en) | Pedestrian volume statistical method based on plurality of Gaussian counting models | |
CN110210451B (en) | Zebra crossing detection method | |
Liu et al. | A survey of vision-based vehicle detection and tracking techniques in ITS | |
CN110008932A (en) | A kind of vehicle violation crimping detection method based on computer vision | |
CN101286239A (en) | Aerial shooting traffic video frequency vehicle rapid checking method | |
CN105574488A (en) | Low-altitude aerial infrared image based pedestrian detection method | |
CN104239867A (en) | License plate locating method and system | |
CN105654091A (en) | Detection method and apparatus for sea-surface target | |
CN105069441A (en) | Moving vehicle detection method based on background updating and particle swarm optimization algorithm | |
CN105574895A (en) | Congestion detection method during the dynamic driving process of vehicle | |
CN110717886A (en) | Pavement pool detection method based on machine vision in complex environment | |
CN104574381A (en) | Full reference image quality evaluation method based on LBP (local binary pattern) | |
CN103794050A (en) | Real-time transport vehicle detecting and tracking method | |
CN106778540A (en) | Parking detection is accurately based on the parking event detecting method of background double layer | |
CN104537649A (en) | Vehicle steering judgment method and system based on image ambiguity comparison | |
CN113378690A (en) | In-road irregular parking identification method based on video data | |
CN106056078A (en) | Crowd density estimation method based on multi-feature regression ensemble learning | |
Lee et al. | Real-time automatic vehicle management system using vehicle tracking and car plate number identification | |
Liu et al. | Effective road lane detection and tracking method using line segment detector | |
CN113516853B (en) | Multi-lane traffic flow detection method for complex monitoring scene | |
CN111652033A (en) | Lane line detection method based on OpenCV |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |