CN1984236A - Feature collection method for traffic flow information video detection - Google Patents

Feature collection method for traffic flow information video detection

Info

Publication number
CN1984236A
CN1984236A · CNA2005100620043A · CN200510062004A
Authority
CN
China
Prior art keywords
value
pixel
image
formula
gaussian
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2005100620043A
Other languages
Chinese (zh)
Other versions
CN100502463C (en)
Inventor
赵燕伟
胡峰俊
董红召
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CNB2005100620043A priority Critical patent/CN100502463C/en
Publication of CN1984236A publication Critical patent/CN1984236A/en
Application granted granted Critical
Publication of CN100502463C publication Critical patent/CN100502463C/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

In the invention, the detection system comprises a video camera and a signal processor. An improved Gaussian mixture model characterizes each pixel in the image frame, using only a brightness feature. When no moving object (vehicle) is present, the video image is relatively static and each pixel's variation over time obeys a statistical model. When a new image frame arrives, the Gaussian mixture model is updated: if a pixel of the current image matches the model, the pixel is determined to be a background point; otherwise it is determined to be a foreground point.

Description

Feature collection method in traffic flow information video detection
(1) Technical field
The present invention relates to a traffic flow information video detection method, and in particular to a feature collection method used in such video detection.
(2) Background technology
Urban populations and vehicle numbers are increasing sharply, traffic volume grows daily, congestion worsens, and traffic systems face immense pressure. Traffic problems have become a principal issue in urban management, hindering and restricting urban economic development, and have gradually become a global concern. Faced with increasingly serious traffic problems, we cannot rely solely on measures such as building and rebuilding roads or adopting signal-light control to alleviate the situation.
Traffic flow information collection is an important step in an intelligent transportation system. The information collected includes traffic volume, vehicle speed, vehicle classification, road occupancy, traffic density, vehicle queue length, vehicle turning, and vehicle stopping or incident conditions. Since 1970, experts and scholars at home and abroad have developed many traffic information collection devices, such as speed radar, inductive loop detectors, ultrasonic detectors, and traffic microwave detectors. Practical application shows that these collection methods have the following shortcomings: (1) detection precision and reliability are not high; (2) they are unsuitable for large-scale detection; (3) the amount of traffic information obtained is small; (4) they cannot show vehicles, license plates, traffic scenes, and other information vital for traffic research, analysis, and enforcement. Limited by detection range, detection capability, and reliability, traditional vehicle detectors therefore cannot meet the requirements of present traffic systems.
Most early video detection technology used the virtual-loop method, as in AUTOSCOPE, CCATS, TAS, IMPACTS, TrafficCam, etc.; its operating principle is similar to a buried loop detector. Current video vehicle tracking identifies pixels in the traffic-scene image that match vehicle features, performs image segmentation, and matches vehicles across preceding and following frames using the extracted features, thereby computing traffic parameters. The problem with feature tracking is that, because the image is affected by the surrounding environment (such as building shadows and street lamps), a vehicle's features cannot be guaranteed identical at different positions along the road. Detecting motion information from an image sequence and recognizing and tracking moving targets is the most important and most critical technology. Current approaches either highlight the target or eliminate the background, and fall roughly into three methods: inter-frame difference, background difference, and optical flow.
Inter-frame difference has strong adaptivity, but choosing the right pair of successive frames for differencing is demanding and depends on the speed of the moving object: neither too fast nor too slow will do. Optical-flow computation is very complex and, without hardware assistance, can hardly satisfy the system's real-time requirements. P. Bouthem, D. Murray, and others have also used this means of analysis to segment motion.
Although ordinary background differencing is fairly simple to implement, its adaptive ability is poor, and certain dynamic changes and interference cannot be avoided. The effectiveness of background elimination is crucial to the realization of the whole system. Many background elimination algorithms have been proposed, among them prediction-based methods such as Kalman filtering and Wiener filtering, but these do not consider depth information. The adaptive background elimination algorithm based on a Gaussian mixture model proposed by Harville [10] et al. considers depth, color information, and temporal adaptivity and improves the system's segmentation, but its computational load is large and its real-time performance poor.
(3) Summary of the invention
To overcome the deficiencies of prior-art traffic flow video detection — feature tracking that is easily affected by the environment, poor real-time performance, and slow processing — the present invention provides a feature collection method for traffic flow information video detection that adapts to environmental changes, runs in real time, and processes quickly.
The technical solution adopted by the present invention to solve the technical problem is:
A feature collection method in traffic flow information video detection. The detection system comprises a video camera and a signal processor. The camera inputs a video image sequence 1, 2, …, t, …; in frame t of the video image, the value of pixel i, X_(i,t) = [R_(i,t), G_(i,t), B_(i,t)], is processed. The probability density function of the k-th Gaussian distribution is formula (1):
η_k(X_t, μ_k, Σ_k) = 1 / ((2π)^(n/2) |Σ_k|^(1/2)) · exp(−(1/2)(X_t − μ_k)^T Σ_k^(−1) (X_t − μ_k))   (1)
The probability of the current pixel i is computed by formula (2):
P(X_(i,t)) = Σ_(k=1..K) ω_(i,t−1,k) · η_k(X_t, μ_(i,t−1,k), Σ_(i,t−1,k))   (2)
This method comprises the following steps:
(1) Acquire the camera's video image, obtain the R, G, B color-space image sequence, and denoise it with median filtering;
(2) Convert the color space of the processed image sequence from R, G, B to a brightness space, S (brightness) = (R + G + B)/3;
(3) Set the parameters of the Gaussian mixture algorithm; the parameters comprise the global background threshold T, the learning rate α, the number of Gaussian components K, and the initial weight ω;
(4) Read the brightness image, take the brightness value of each pixel of the first frame as the mean of the mixture, set the variance to a predetermined empirical value, and establish a single-Gaussian background model;
(5) Read frame t and compare each pixel with its existing k (k ≤ K) Gaussian components, testing whether formula (3) holds:
|X_(i,t) − μ_k| < 2.5σ_k   (3);
(5.1) If it matches, update the parameters and weight of the k-th Gaussian component; the parameters comprise the mean and variance, per formulas (4), (5), (6):
μ_t = (1−ρ)μ_(t−1) + ρX_t   (4)
σ_t² = (1−ρ)σ_(t−1)² + ρ(X_t − μ_t)^T(X_t − μ_t)   (5)
ρ = α·η(X_t | μ_k, σ_k)   (6);
(5.2) If there is no match and k < K, add a new Gaussian component for frame t; the new component takes the value of X_(i,t) as its mean, with empirical variance and weight ω;
(5.3) If there is no match and k = K, replace the lowest-weight Gaussian among the K components with a new one; the new component takes the value of X_(i,t) as its mean, with empirical variance and weight ω;
(5.4) The weight ω is updated by formula (7):
ω_(k,t) = (1−α)ω_(k,t−1) + α·M_(k,t)   (7)
In the formula, ω_(k,t) is the current weight, α is the learning rate, ω_(k,t−1) is the corresponding weight of the previous frame, and M_(k,t) is the match indicator: M_(k,t) = 1 on a match, M_(k,t) = 0 otherwise;
(6) Take the absolute difference between the established background model and the current image, extract the features of the moving target, and obtain the vehicle contour and tracking parameters through processing.
Further, in step (6), extracting the features of the moving target comprises:
(6.1) computing the area of the moving region: let the side length of a square pixel be h; the area S of region A is computed by formula (8):
S = Σ_((x,y)∈A) h²   (8)
where the points (x, y) range over all points belonging to region A;
(6.2) computing the region center: the centroid is computed from all points in region A by formulas (9), (10):
x̄ = (1/S) Σ_((x,y)∈A) x   (9)
ȳ = (1/S) Σ_((x,y)∈A) y   (10)
(6.3) computing the length and width of the moving target using the object's minimum enclosing rectangle (MER): the object's boundary is rotated in increments of a predetermined angle; after each increment, a horizontally placed rectangle MER fits the boundary, and the minimum and maximum X and Y values of the rotated boundary points are recorded; when the area of the MER reaches its minimum, the dimensions of said MER give the length and width of the target.
Further, in step (6), extracting the features of the moving target also comprises:
(6.4) invariant moments: for the digital image function f(x, y), the moments are computed from all points belonging to the region; if f is piecewise continuous and nonzero only at finitely many points of the XY plane, its moments of every order exist.
Further, in step (6), before extracting the features of the moving target, first binarize the moving image and apply dilation and erosion. Dilation fills small holes in the target region and merges all background points touching the object into the object; erosion removes isolated noise foreground points and eliminates all boundary points of the object. The contour of the moving vehicle is obtained and stored in the contour attribute of a user-defined structure.
In step (3), the global background threshold T is T = 0.7, the learning rate α typically lies in [0.001, 0.01], K typically lies in [3, 5], and the initial weight is ω = 0.05.
The operating principle of the invention is as follows. Only the luminance component of the covariance matrix is used: noise interferes strongly with chrominance information but only weakly with luminance, so sacrificing chrominance greatly improves the real-time performance of the whole traffic flow detection system while barely affecting target extraction. Differencing the current image frame against the background model yields an accurate moving-vehicle target; binarization, erosion, and dilation yield the moving vehicle's contour, from which the region area, region centroid, minimum enclosing rectangle (MER) of the vehicle contour, and invariant moments are extracted. Thresholding the region area distinguishes whether the moving object is a person, a vehicle, or some other disturbance; the extracted centroid, MER, and invariant moments realize real-time and effective detection and tracking of moving vehicles.
The beneficial effects of the invention are mainly: 1. it adapts to environmental changes and can cope with rainy or foggy weather and slow changes in lighting; 2. the algorithm is fast and highly real-time, processing 16-17 frames per second; 3. it is simple to operate; 4. through video detection it detects road traffic flow information in real time, detects road traffic conditions, and records traffic flow data and road condition information.
(4) Description of drawings
Fig. 1 is the system flow diagram of the feature collection method in traffic flow information video detection.
Fig. 2 is the flow chart of feature collection based on the improved Gaussian mixture background model.
(5) Embodiments
The invention is further described below with reference to the drawings.
With reference to Fig. 1 and Fig. 2: a feature collection method in traffic flow information video detection. The detection system comprises a video camera and a signal processor; all pixel processing described below assumes a fixed camera. The input video image sequence is {1, 2, …, t, …}; in frame t of the video image, the value of pixel i, X_(i,t) = [R_(i,t), G_(i,t), B_(i,t)], is processed, where the probability density function of the k-th Gaussian distribution is formula (1):
η_k(X_t, μ_k, Σ_k) = 1 / ((2π)^(n/2) |Σ_k|^(1/2)) · exp(−(1/2)(X_t − μ_k)^T Σ_k^(−1) (X_t − μ_k))   (1)
The probability of the current pixel i is computed by formula (2):
P(X_(i,t)) = Σ_(k=1..K) ω_(i,t−1,k) · η_k(X_t, μ_(i,t−1,k), Σ_(i,t−1,k))   (2)
This feature collection method comprises the following steps:
(1) Obtain the R, G, B color-space image sequence from the high-resolution CCD camera. It naturally contains noise, so denoise it with median filtering. Median filtering is a common way to remove random image noise. In traffic flow images, low-pass filtering would blur the boundaries while removing noise, because edge contours contain a large amount of high-frequency information; high-pass filtering would reinforce the noise while filtering. Median filtering in the spatial domain therefore suppresses the noise in the image while keeping the contours sharp.
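The median-filtering step can be sketched as follows. This is a minimal illustration, not the patent's implementation: the 3×3 window size, the reflection padding, and the function name `median_filter3` are assumptions.

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter over a 2-D image; edges handled by reflection padding."""
    padded = np.pad(img, 1, mode="reflect")
    # Stack the 9 shifted views of the padded image and take the per-pixel median.
    stack = np.stack([padded[r:r + img.shape[0], c:c + img.shape[1]]
                      for r in range(3) for c in range(3)])
    return np.median(stack, axis=0)
```

A single salt-noise pixel in a flat region is removed, while large uniform areas pass through unchanged, which is why contours survive better than with a low-pass filter.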
(2) Convert the color space of the processed image sequence from R, G, B to brightness: S (brightness) = (R + G + B)/3.
(3) Set the parameters of the Gaussian mixture algorithm. The global background threshold T (which determines the number of background components) is generally T = 0.7. The learning rate α generally lies in [0.001, 0.01]; here α = 0.005. K generally lies in [3, 5]: the larger K, the more complex the scenes the system can characterize, but the greater the computational load; this algorithm takes K = 3. The initial weight ω is generally a small value; here ω = 0.05.
(4) Read the brightness image and take the brightness value of each pixel of the first frame as the mean of the mixture. The variance is given a larger value, generally empirical; here σ = 20. This establishes the single-Gaussian background model (initialization of the Gaussian mixture model).
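Initialization of the per-pixel mixture from the first frame might look like the sketch below. The array shapes, the function name, and the choice to give the seeded component full weight are assumptions; the text only specifies the first-frame brightness as the mean, σ = 20, and a small initial weight ω = 0.05.

```python
import numpy as np

def init_mixture(first_frame, K=3, sigma0=20.0, w0=0.05):
    """Per-pixel K-component mixture seeded from the first brightness frame."""
    H, W = first_frame.shape
    mu = np.zeros((H, W, K))
    mu[:, :, 0] = first_frame            # mean = first-frame brightness (step 4)
    sigma = np.full((H, W, K), sigma0)   # larger empirical std dev, sigma = 20
    w = np.full((H, W, K), w0)           # small initial weight, omega = 0.05
    w[:, :, 0] = 1.0                     # assumption: seeded component starts dominant
    return mu, sigma, w
```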
(5) Read frame t and compare each pixel with its existing k (k ≤ K) Gaussian components, testing whether formula (3) holds:
|X_(i,t) − μ_k| < 2.5σ_k   (3);
(5.1) If it matches, update the parameters and weight of the k-th Gaussian component. The parameters comprise the mean and variance; see formulas (4), (5), (6):
μ_t = (1−ρ)μ_(t−1) + ρX_t   (4)
σ_t² = (1−ρ)σ_(t−1)² + ρ(X_t − μ_t)^T(X_t − μ_t)   (5)
ρ = α·η(X_t | μ_k, σ_k)   (6)
(5.2) If there is no match and k < K, add a new Gaussian component. The new component takes X_(i,t) as its mean, a larger (empirical) variance, here σ = 20, and a smaller weight, ω = 0.05.
(5.3) If there is no match and k = K, replace the lowest-weight Gaussian with a new one; its mean and variance are taken in the same way.
(5.4) The weight ω is updated by formula (7):
ω_(k,t) = (1−α)ω_(k,t−1) + α·M_(k,t)   (7)
In the formula, ω_(k,t) is the current weight, α is the learning rate, ω_(k,t−1) is the corresponding weight of the previous frame, and M_(k,t) is the match indicator: M_(k,t) = 1 on a match, M_(k,t) = 0 otherwise. As for the learning rate α: a larger α adapts more strongly to environmental change, so a quickly changing background is absorbed into the background model, but the model is then easily affected by noise; a smaller α adapts less, and temporarily stationary objects may be absorbed into the background model as changed background.
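Under the brightness-only model, the match test (3) and the updates (4)-(7) for a single matched component can be sketched as below. The scalar 1-D form of η and the function names are assumptions; the patent states the formulas but not this code.

```python
import numpy as np

def matches(x, mu, sigma):
    """Formula (3): a pixel matches a component within 2.5 standard deviations."""
    return abs(x - mu) < 2.5 * sigma

def update_matched(x, mu, var, w, alpha=0.005):
    """Formulas (4)-(7) for one matched component (M = 1), scalar brightness."""
    sigma = np.sqrt(var)
    # Formula (6): rho = alpha * eta(x; mu, sigma), 1-D Gaussian density.
    eta = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    rho = alpha * eta
    mu_new = (1 - rho) * mu + rho * x                     # formula (4)
    var_new = (1 - rho) * var + rho * (x - mu_new) ** 2   # formula (5)
    w_new = (1 - alpha) * w + alpha * 1.0                 # formula (7), matched
    return mu_new, var_new, w_new
```

Because ρ scales with the density η, stable pixels (small |x − μ|) adapt faster than outliers, mirroring the α trade-off discussed above.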
Read a new frame and repeat steps (5.1)-(5.4) to build the Gaussian distributions of the background model.
(6) Take the absolute difference between the established background model and the current image, extract the features of the moving target, and obtain the vehicle contour and tracking parameters through processing.
First binarize the foreground target and apply dilation and erosion. Dilation is the process of filling small holes in the target region and merging all background points touching the object into the object. Erosion is the process of removing isolated noise foreground points and eliminating all boundary points of the object. These two morphological operations yield a more accurate moving-vehicle contour, which is stored in the contour attribute of a user-defined structure.
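A bare-bones version of the two morphological operations on a boolean foreground mask, using a 3×3 neighborhood (the neighborhood size and function names are assumptions, not specified by the text):

```python
import numpy as np

def dilate(mask):
    """3x3 binary dilation: fills small holes and absorbs touching background points."""
    p = np.pad(mask, 1)  # pad with False
    out = np.zeros_like(mask)
    for r in range(3):
        for c in range(3):
            out |= p[r:r + mask.shape[0], c:c + mask.shape[1]]
    return out

def erode(mask):
    """3x3 binary erosion: removes isolated noise foreground points and boundaries."""
    p = np.pad(mask, 1, constant_values=1)  # pad with True so the border survives AND
    out = np.ones_like(mask)
    for r in range(3):
        for c in range(3):
            out &= p[r:r + mask.shape[0], c:c + mask.shape[1]]
    return out
```

Applying erosion then dilation (an opening) removes single-pixel noise while roughly preserving large vehicle blobs.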
Target extraction gives us the target of interest; the target's features must then be extracted and described. Current target features divide into gray-level features, texture features, and geometric features. Several features used by this system during tracking are introduced below:
1) Region area
Region area is a basic feature of a region, describing its size. Let the side length of a square pixel be h; the area S is computed by formula (8):
S = Σ_((x,y)∈A) h²   (8)
The points (x, y) range over all points belonging to region A. By computing the area of the moving region we can judge whether the moving target is a non-vehicle disturbance, judge the width of the moving vehicle, and detect overlapping vehicles. When the moving vehicle's area falls below a certain threshold, the vehicle is judged to have left the effective region.
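Formula (8) reduces to a pixel count scaled by h²; the area threshold then acts as a cheap disturbance filter. A sketch, with helper names and threshold semantics assumed:

```python
import numpy as np

def region_area(mask, h=1.0):
    """Formula (8): area = (pixel side length)^2 summed over the region's points."""
    return np.count_nonzero(mask) * h * h

def is_vehicle_candidate(mask, min_area, h=1.0):
    """Reject small blobs (noise, non-vehicle disturbances) by area threshold."""
    return region_area(mask, h) >= min_area
```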
2) Region centroid
The region centroid is a global descriptor. Its coordinates are computed from all points belonging to the region; although the coordinates of each region point are integers, the centroid's coordinates often are not. When the region itself is small relative to the distances between regions, the region can be approximately represented by a particle at its centroid.
The centroid is computed from all points in region A by formulas (9), (10):
x̄ = (1/S) Σ_((x,y)∈A) x   (9)
ȳ = (1/S) Σ_((x,y)∈A) y   (10)
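Formulas (9) and (10) are simply the mean coordinates of the region's points; a sketch assuming a boolean region mask:

```python
import numpy as np

def region_centroid(mask):
    """Formulas (9)-(10): centroid = mean (x, y) over region points; need not be integer."""
    ys, xs = np.nonzero(mask)  # row indices are y, column indices are x
    return xs.mean(), ys.mean()
```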
3) Length and width
After an object is extracted from an image, computing its span in the horizontal and vertical directions is easy: one need only know the object's minimum and maximum row and column numbers. But for an object in arbitrary motion, the directions of interest may differ from horizontal and vertical; in that case the minimum enclosing rectangle (MER) of the object can be used.
With the MER technique, the object's boundary is rotated up to 90° in increments of about 3°. After each increment, a horizontally placed MER fits the boundary; for the computation one need only record the minimum and maximum X and Y values of the rotated boundary points. At some rotation angle, the area of the MER reaches its minimum, and the dimensions of the MER then represent the length and width of the object. The rotation angle at the minimum also gives the object's principal axis direction. The technique is especially suited to roughly rectangular objects and gives satisfactory results for vehicle detection. When vehicles overlap or move too fast, matching the target's mathematical features while tracking the moving vehicle's contour effectively improves tracking precision.
4) Invariant moments
The moments of a region in the image plane can also be considered as features. For the digital image function f(x, y), if f is piecewise continuous and nonzero only at finitely many points of the XY plane, it can be proved that its moments of every order exist. Region moments are computed from all points belonging to the region and are therefore relatively insensitive to noise.
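Central and normalized moments over a binary region mask can be sketched as follows. The normalization exponent is the standard one for scale invariance; it is an assumption, since the text does not spell out the moment formulas.

```python
import numpy as np

def central_moment(mask, p, q):
    """mu_pq about the centroid, computed from all region points (noise-robust)."""
    ys, xs = np.nonzero(mask)
    xb, yb = xs.mean(), ys.mean()
    return np.sum((xs - xb) ** p * (ys - yb) ** q)

def normalized_moment(mask, p, q):
    """eta_pq = mu_pq / mu_00^((p+q)/2 + 1): invariant to scale changes."""
    mu00 = central_moment(mask, 0, 0)
    return central_moment(mask, p, q) / mu00 ** ((p + q) / 2 + 1)
```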
(7) Extract the object's contour, centroid, area, minimum enclosing rectangle (MER), and invariant moments, and store these key features in the user-defined structure;
(8) If a new moving vehicle appears, repeat steps (5)-(7).
An improved Gaussian mixture model is used to characterize the feature of each pixel in the image frame, and the characterized feature uses only brightness. If no moving target (vehicle) is present, the video image is relatively static and each pixel's variation over time obeys a certain statistical model; in this algorithm each pixel is characterized by a mixture of K Gaussian distributions. When a new image frame arrives, the mixture model is updated: if a pixel of the current image matches the model, the point is judged a background point; otherwise it is judged a foreground point. Taking the absolute difference between the established background model and the current image prevents pixel overflow and the white specks it produces in the image, controls noise better, and after processing yields a more accurate vehicle contour; together with the parameters needed for tracking, good results are achieved in vehicle counting and speed detection.
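The per-pixel background/foreground decision described above, vectorized over a brightness frame, might look like this sketch (the array shapes and function name are assumptions):

```python
import numpy as np

def classify(frame, mu, sigma):
    """A pixel is background if it matches ANY of its K Gaussians per formula (3);
    otherwise it is foreground. mu and sigma have shape (H, W, K)."""
    diff = np.abs(frame[..., None] - mu)        # broadcast frame against K components
    matched = (diff < 2.5 * sigma).any(axis=-1)
    return ~matched                             # True = foreground (vehicle candidate)
```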

Claims (5)

1. A feature collection method in traffic flow information video detection, the detection system comprising a video camera and a signal processor, the camera inputting a video image sequence 1, 2, …, t, …, and, in frame t of the video image, the value of pixel i, X_(i,t) = [R_(i,t), G_(i,t), B_(i,t)], being processed, the probability density function of the k-th Gaussian distribution being formula (1):
η_k(X_t, μ_k, Σ_k) = 1 / ((2π)^(n/2) |Σ_k|^(1/2)) · exp(−(1/2)(X_t − μ_k)^T Σ_k^(−1) (X_t − μ_k))   (1)
and the probability of the current pixel i being computed by formula (2):
P(X_(i,t)) = Σ_(k=1..K) ω_(i,t−1,k) · η_k(X_t, μ_(i,t−1,k), Σ_(i,t−1,k))   (2)
This method comprises the following steps:
(1) acquiring the camera's video image, obtaining the R, G, B color-space image sequence, and denoising it with median filtering;
(2) converting the color space of the processed image sequence from R, G, B to a brightness space, S (brightness) = (R + G + B)/3;
(3) setting the parameters of the Gaussian mixture algorithm, said parameters comprising the global background threshold T, the learning rate α, the number of Gaussian components K, and the initial weight ω;
(4) reading the brightness image, taking the brightness value of each pixel of the first frame as the mean of the mixture, setting the variance to a predetermined empirical value, and establishing a single-Gaussian background model;
(5) reading frame t and comparing each pixel with its existing k (k ≤ K) Gaussian components, testing whether formula (3) holds:
|X_(i,t) − μ_k| < 2.5σ_k   (3);
(5.1) if it matches, updating the parameters and weight of the k-th Gaussian component, the parameters comprising the mean and variance, per formulas (4), (5), (6):
μ_t = (1−ρ)μ_(t−1) + ρX_t   (4)
σ_t² = (1−ρ)σ_(t−1)² + ρ(X_t − μ_t)^T(X_t − μ_t)   (5)
ρ = α·η(X_t | μ_k, σ_k)   (6);
(5.2) if there is no match and k < K, adding a new Gaussian component for frame t, the new component taking the value of X_(i,t) as its mean, with empirical variance and weight ω;
(5.3) if there is no match and k = K, replacing the lowest-weight Gaussian among the K components with a new one, the new component taking the value of X_(i,t) as its mean, with empirical variance and weight ω;
(5.4) updating the weight ω by formula (7):
ω_(k,t) = (1−α)ω_(k,t−1) + α·M_(k,t)   (7)
where ω_(k,t) is the current weight, α is the learning rate, ω_(k,t−1) is the corresponding weight of the previous frame, and M_(k,t) is the match indicator: M_(k,t) = 1 on a match, M_(k,t) = 0 otherwise;
(6) taking the absolute difference between the established background model and the current image, extracting the features of the moving target, and obtaining the vehicle contour and tracking parameters through processing.
2. The feature collection method in traffic flow information video detection of claim 1, characterized in that in step (6) the extracted features of the moving target comprise:
(6.1) computing the area of the moving region: letting the side length of a square pixel be h, the area S of region A is computed by formula (8):
S = Σ_((x,y)∈A) h²   (8)
where the points (x, y) range over all points belonging to region A;
(6.2) computing the region center: the centroid is computed from all points in region A by formulas (9), (10):
x̄ = (1/S) Σ_((x,y)∈A) x   (9)
ȳ = (1/S) Σ_((x,y)∈A) y   (10)
(6.3) computing the length and width of the moving target using the object's minimum enclosing rectangle (MER): the object's boundary is rotated in increments of a predetermined angle; after each increment, a horizontally placed rectangle MER fits the boundary, and the minimum and maximum X and Y values of the rotated boundary points are recorded; when the area of the MER reaches its minimum, the dimensions of said MER give the length and width of the target.
3. The feature collection method in traffic flow information video detection of claim 2, characterized in that in step (6) the extracted features of the moving target further comprise:
(6.4) invariant moments: for the digital image function f(x, y), the moments are computed from all points belonging to the region; if f is piecewise continuous and nonzero only at finitely many points of the XY plane, its moments of every order exist.
4. The feature collection method in traffic flow information video detection of any one of claims 1-3, characterized in that in step (6), before extracting the features of the moving target, the moving image is first binarized and then dilated and eroded: dilation fills small holes in the target region and merges all background points touching the object into the object; erosion removes isolated noise foreground points and eliminates all boundary points of the object; the contour of the moving vehicle is obtained and stored in the contour attribute of a user-defined structure.
5. The feature collection method in traffic flow information video detection of claim 4, characterized in that in step (3), the global background threshold T is T = 0.7, the learning rate α typically lies in [0.001, 0.01], K typically lies in [3, 5], and the initial weight is ω = 0.05.
CNB2005100620043A 2005-12-14 2005-12-14 Feature collection method for traffic flow information video detection Active CN100502463C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2005100620043A CN100502463C (en) 2005-12-14 2005-12-14 Feature collection method for traffic flow information video detection

Publications (2)

Publication Number Publication Date
CN1984236A true CN1984236A (en) 2007-06-20
CN100502463C CN100502463C (en) 2009-06-17

Family

ID=38166433

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2005100620043A Active CN100502463C (en) 2005-12-14 2005-12-14 Method for collecting characteristics in telecommunication flow information video detection

Country Status (1)

Country Link
CN (1) CN100502463C (en)

Cited By (58)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101282461B (en) * 2007-04-02 2010-06-02 财团法人工业技术研究院 Image processing methods
CN101437113B (en) * 2007-11-14 2010-07-28 汉王科技股份有限公司 Apparatus and method for detecting self-adapting inner core density estimation movement
CN101799968A (en) * 2010-01-13 2010-08-11 任芳 Detection method and device for oil well intrusion based on video image intelligent analysis
CN101431665B (en) * 2007-11-08 2010-09-15 财团法人工业技术研究院 Method and system for detecting and tracing object
CN101883209A (en) * 2010-05-31 2010-11-10 中山大学 Method by integrating background model and three-frame difference to detect video background
CN101882311A (en) * 2010-06-08 2010-11-10 中国科学院自动化研究所 Background modeling acceleration method based on CUDA (Compute Unified Device Architecture) technology
CN101527838B (en) * 2008-03-04 2010-12-08 华为技术有限公司 Method and system for feedback-type object detection and tracing of video object
CN101916447A (en) * 2010-07-29 2010-12-15 江苏大学 Robust motion target detecting and tracking image processing system
CN101639983B (en) * 2009-08-21 2011-02-02 任雪梅 Multilane traffic volume detection method based on image information entropy
CN101964113A (en) * 2010-10-02 2011-02-02 上海交通大学 Method for detecting moving target in illuminance abrupt variation scene
CN101980300A (en) * 2010-10-29 2011-02-23 杭州电子科技大学 3G smart phone-based motion detection method
CN102043950A (en) * 2010-12-30 2011-05-04 南京信息工程大学 Vehicle outline recognition method based on Canny operator and edge point statistics
CN102081802A (en) * 2011-01-26 2011-06-01 北京中星微电子有限公司 Method and device for detecting color card based on block matching
CN101303732B (en) * 2008-04-11 2011-06-22 西安交通大学 Method for perceiving and alarming on moving targets based on a vehicle-mounted monocular camera
CN101448151B (en) * 2007-11-28 2011-08-17 汉王科技股份有限公司 Motion detection device and method based on adaptive kernel density estimation
CN102236968A (en) * 2010-05-05 2011-11-09 刘嘉 Remote intelligent monitoring system for transport vehicle
CN102385705A (en) * 2010-09-02 2012-03-21 大猩猩科技股份有限公司 Abnormal behavior detection system and method using automatic multi-feature clustering
CN101909145B (en) * 2009-06-05 2012-03-28 鸿富锦精密工业(深圳)有限公司 Image noise filtering system and method
CN101635026B (en) * 2008-07-23 2012-05-23 中国科学院自动化研究所 Method for detecting abandoned objects without a tracking process
CN102521580A (en) * 2011-12-21 2012-06-27 华平信息技术(南昌)有限公司 Real-time target matching tracking method and system
CN102693637A (en) * 2012-06-12 2012-09-26 北京联合大学 Signal lamp for prompting right-turn vehicle to avoid pedestrians at crossroad
CN101872279B (en) * 2009-04-23 2012-11-21 深圳富泰宏精密工业有限公司 Electronic device and method for adjusting position of display image thereof
CN102799857A (en) * 2012-06-19 2012-11-28 东南大学 Video multi-vehicle outline detection method
CN102867193A (en) * 2012-09-14 2013-01-09 成都国科海博计算机系统有限公司 Biological detection method and device and biological detector
CN103150738A (en) * 2013-02-02 2013-06-12 南京理工大学 Moving object detection method for distributed multi-sensor systems
CN101540103B (en) * 2008-03-17 2013-06-19 上海宝康电子控制工程有限公司 Method and system for traffic information acquisition and event processing
CN103272783A (en) * 2013-06-21 2013-09-04 核工业理化工程研究院华核新技术开发公司 Color determination and separation method for color CCD color sorting machine
RU2506640C2 (en) * 2012-03-12 2014-02-10 Государственное казенное образовательное учреждение высшего профессионального образования Академия Федеральной службы охраны Российской Федерации (Академия ФСО России) Method of identifying insert frames in multimedia data stream
CN103578121A (en) * 2013-11-22 2014-02-12 南京信大气象装备有限公司 Motion detection method based on shared Gaussian model in disturbed motion environment
CN103646544A (en) * 2013-11-15 2014-03-19 天津天地伟业数码科技有限公司 Vehicle behavior analysis and recognition method based on pan-tilt and camera devices
US8693731B2 (en) 2012-01-17 2014-04-08 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging
CN104036288A (en) * 2014-05-30 2014-09-10 宁波海视智能系统有限公司 Vehicle type classification method based on videos
CN104267209A (en) * 2014-10-24 2015-01-07 浙江力石科技股份有限公司 Method and system for expressway video speed measurement based on virtual coils
CN104950285A (en) * 2015-06-02 2015-09-30 西安理工大学 RFID (radio frequency identification) indoor positioning method based on signal difference value change of neighboring tags
US9153028B2 (en) 2012-01-17 2015-10-06 Leap Motion, Inc. Systems and methods for capturing motion in three-dimensional space
US9285893B2 (en) 2012-11-08 2016-03-15 Leap Motion, Inc. Object detection and tracking with variable-field illumination devices
CN105472204A (en) * 2014-09-05 2016-04-06 南京理工大学 Inter-frame noise reduction method based on motion detection
US9465461B2 (en) 2013-01-08 2016-10-11 Leap Motion, Inc. Object detection and tracking with audio and optical signals
CN106412501A (en) * 2016-09-20 2017-02-15 华中科技大学 Construction safety behavior intelligent monitoring system based on video and monitoring method thereof
US9613262B2 (en) 2014-01-15 2017-04-04 Leap Motion, Inc. Object detection and tracking for providing a virtual device experience
US9679215B2 (en) 2012-01-17 2017-06-13 Leap Motion, Inc. Systems and methods for machine control
US9916009B2 (en) 2013-04-26 2018-03-13 Leap Motion, Inc. Non-tactile interface systems and methods
US9945660B2 (en) 2012-01-17 2018-04-17 Leap Motion, Inc. Systems and methods of locating a control object appendage in three dimensional (3D) space
CN108694833A (en) * 2018-07-17 2018-10-23 重庆交通大学 Traffic abnormal event detection system based on binary sensors
CN109035205A (en) * 2018-06-27 2018-12-18 清华大学苏州汽车研究院(吴江) Water hyacinth contamination detection method based on video analysis
CN109146914A (en) * 2018-06-20 2019-01-04 上海市政工程设计研究总院(集团)有限公司 Drunk driving behavior early warning method for expressways based on video analysis
US10585193B2 (en) 2013-03-15 2020-03-10 Ultrahaptics IP Two Limited Determining positional information of an object in space
US10609285B2 (en) 2013-01-07 2020-03-31 Ultrahaptics IP Two Limited Power consumption in motion-capture systems
US10691219B2 (en) 2012-01-17 2020-06-23 Ultrahaptics IP Two Limited Systems and methods for machine control
US10739862B2 (en) 2013-01-15 2020-08-11 Ultrahaptics IP Two Limited Free-space user interface and control using virtual constructs
US10846942B1 (en) 2013-08-29 2020-11-24 Ultrahaptics IP Two Limited Predictive information for free space gesture control and communication
CN113168704A (en) * 2019-02-22 2021-07-23 轨迹人有限责任公司 System and method for driving travel path features in a driving range
CN113286194A (en) * 2020-02-20 2021-08-20 北京三星通信技术研究有限公司 Video processing method and device, electronic equipment and readable storage medium
US11720180B2 (en) 2012-01-17 2023-08-08 Ultrahaptics IP Two Limited Systems and methods for machine control
US11740705B2 (en) 2013-01-15 2023-08-29 Ultrahaptics IP Two Limited Method and system for controlling a machine according to a characteristic of a control object
US11778159B2 (en) 2014-08-08 2023-10-03 Ultrahaptics IP Two Limited Augmented reality with motion sensing
US11775033B2 (en) 2013-10-03 2023-10-03 Ultrahaptics IP Two Limited Enhanced field of view to augment three-dimensional (3D) sensory space for free-space gesture interpretation
US11868687B2 (en) 2013-10-31 2024-01-09 Ultrahaptics IP Two Limited Predictive information for free space gesture control and communication

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102955940B (en) * 2012-11-28 2015-12-23 山东电力集团公司济宁供电公司 Power transmission line object detection system and method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6519561B1 (en) * 1997-11-03 2003-02-11 T-Netix, Inc. Model adaptation of neural tree networks and other fused models for speaker verification
JP4336865B2 (en) * 2001-03-13 2009-09-30 日本電気株式会社 Voice recognition device
CN100367294C (en) * 2005-06-23 2008-02-06 复旦大学 Method for segmenting human skin regions from color digital images and video

Cited By (100)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101282461B (en) * 2007-04-02 2010-06-02 财团法人工业技术研究院 Image processing methods
CN101431665B (en) * 2007-11-08 2010-09-15 财团法人工业技术研究院 Method and system for detecting and tracking an object
CN101437113B (en) * 2007-11-14 2010-07-28 汉王科技股份有限公司 Apparatus and method for motion detection based on adaptive kernel density estimation
CN101448151B (en) * 2007-11-28 2011-08-17 汉王科技股份有限公司 Motion detection device and method based on adaptive kernel density estimation
CN101527838B (en) * 2008-03-04 2010-12-08 华为技术有限公司 Method and system for feedback-based detection and tracking of a video object
CN101540103B (en) * 2008-03-17 2013-06-19 上海宝康电子控制工程有限公司 Method and system for traffic information acquisition and event processing
CN101303732B (en) * 2008-04-11 2011-06-22 西安交通大学 Method for perceiving and alarming on moving targets based on a vehicle-mounted monocular camera
CN101635026B (en) * 2008-07-23 2012-05-23 中国科学院自动化研究所 Method for detecting abandoned objects without a tracking process
CN101872279B (en) * 2009-04-23 2012-11-21 深圳富泰宏精密工业有限公司 Electronic device and method for adjusting position of display image thereof
CN101909145B (en) * 2009-06-05 2012-03-28 鸿富锦精密工业(深圳)有限公司 Image noise filtering system and method
CN101639983B (en) * 2009-08-21 2011-02-02 任雪梅 Multilane traffic volume detection method based on image information entropy
CN101799968A (en) * 2010-01-13 2010-08-11 任芳 Detection method and device for oil well intrusion based on video image intelligent analysis
CN102236968B (en) * 2010-05-05 2015-08-19 刘嘉 Intelligent remote monitoring system for transport vehicle
CN102236968A (en) * 2010-05-05 2011-11-09 刘嘉 Remote intelligent monitoring system for transport vehicle
CN101883209B (en) * 2010-05-31 2012-09-12 中山大学 Method for detecting video background by integrating a background model and three-frame differencing
CN101883209A (en) * 2010-05-31 2010-11-10 中山大学 Method for detecting video background by integrating a background model and three-frame differencing
CN101882311A (en) * 2010-06-08 2010-11-10 中国科学院自动化研究所 Background modeling acceleration method based on CUDA (Compute Unified Device Architecture) technology
CN101916447B (en) * 2010-07-29 2012-08-15 江苏大学 Robust motion target detecting and tracking image processing system
CN101916447A (en) * 2010-07-29 2010-12-15 江苏大学 Robust motion target detecting and tracking image processing system
CN102385705A (en) * 2010-09-02 2012-03-21 大猩猩科技股份有限公司 Abnormal behavior detection system and method using automatic multi-feature clustering
CN102385705B (en) * 2010-09-02 2013-09-18 大猩猩科技股份有限公司 Abnormal behavior detection system and method using automatic multi-feature clustering
CN101964113A (en) * 2010-10-02 2011-02-02 上海交通大学 Method for detecting moving targets in scenes with abrupt illumination changes
CN101980300A (en) * 2010-10-29 2011-02-23 杭州电子科技大学 3G smart phone-based motion detection method
CN102043950A (en) * 2010-12-30 2011-05-04 南京信息工程大学 Vehicle outline recognition method based on Canny operator and edge point statistics
CN102043950B (en) * 2010-12-30 2012-11-28 南京信息工程大学 Vehicle outline recognition method based on Canny operator and edge point statistics
CN102081802A (en) * 2011-01-26 2011-06-01 北京中星微电子有限公司 Method and device for detecting color card based on block matching
CN102521580A (en) * 2011-12-21 2012-06-27 华平信息技术(南昌)有限公司 Real-time target matching tracking method and system
US11782516B2 (en) 2012-01-17 2023-10-10 Ultrahaptics IP Two Limited Differentiating a detected object from a background using a gaussian brightness falloff pattern
US10691219B2 (en) 2012-01-17 2020-06-23 Ultrahaptics IP Two Limited Systems and methods for machine control
US11720180B2 (en) 2012-01-17 2023-08-08 Ultrahaptics IP Two Limited Systems and methods for machine control
US9697643B2 (en) 2012-01-17 2017-07-04 Leap Motion, Inc. Systems and methods of object shape and position determination in three-dimensional (3D) space
US9495613B2 (en) 2012-01-17 2016-11-15 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging using formed difference images
US11308711B2 (en) 2012-01-17 2022-04-19 Ultrahaptics IP Two Limited Enhanced contrast for object detection and characterization by optical imaging based on differences between images
US10767982B2 (en) 2012-01-17 2020-09-08 Ultrahaptics IP Two Limited Systems and methods of locating a control object appendage in three dimensional (3D) space
US8693731B2 (en) 2012-01-17 2014-04-08 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging
US9741136B2 (en) 2012-01-17 2017-08-22 Leap Motion, Inc. Systems and methods of object shape and position determination in three-dimensional (3D) space
US10699155B2 (en) 2012-01-17 2020-06-30 Ultrahaptics IP Two Limited Enhanced contrast for object detection and characterization by optical imaging based on differences between images
US9679215B2 (en) 2012-01-17 2017-06-13 Leap Motion, Inc. Systems and methods for machine control
US9934580B2 (en) 2012-01-17 2018-04-03 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging based on differences between images
US9672441B2 (en) 2012-01-17 2017-06-06 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging based on differences between images
US9652668B2 (en) 2012-01-17 2017-05-16 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging based on differences between images
US9626591B2 (en) 2012-01-17 2017-04-18 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging
US9153028B2 (en) 2012-01-17 2015-10-06 Leap Motion, Inc. Systems and methods for capturing motion in three-dimensional space
US10565784B2 (en) 2012-01-17 2020-02-18 Ultrahaptics IP Two Limited Systems and methods for authenticating a user according to a hand of the user moving in a three-dimensional (3D) space
US10410411B2 (en) 2012-01-17 2019-09-10 Leap Motion, Inc. Systems and methods of object shape and position determination in three-dimensional (3D) space
US9767345B2 (en) 2012-01-17 2017-09-19 Leap Motion, Inc. Systems and methods of constructing three-dimensional (3D) model of an object using image cross-sections
US10366308B2 (en) 2012-01-17 2019-07-30 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging based on differences between images
US9945660B2 (en) 2012-01-17 2018-04-17 Leap Motion, Inc. Systems and methods of locating a control object appendage in three dimensional (3D) space
US9436998B2 (en) 2012-01-17 2016-09-06 Leap Motion, Inc. Systems and methods of constructing three-dimensional (3D) model of an object using image cross-sections
US9778752B2 (en) 2012-01-17 2017-10-03 Leap Motion, Inc. Systems and methods for machine control
RU2506640C2 (en) * 2012-03-12 2014-02-10 Государственное казенное образовательное учреждение высшего профессионального образования Академия Федеральной службы охраны Российской Федерации (Академия ФСО России) Method of identifying insert frames in multimedia data stream
CN102693637B (en) * 2012-06-12 2014-09-03 北京联合大学 Signal lamp for prompting right-turn vehicle to avoid pedestrians at crossroad
CN102693637A (en) * 2012-06-12 2012-09-26 北京联合大学 Signal lamp for prompting right-turn vehicle to avoid pedestrians at crossroad
CN102799857A (en) * 2012-06-19 2012-11-28 东南大学 Video multi-vehicle outline detection method
CN102799857B (en) * 2012-06-19 2014-12-17 东南大学 Video multi-vehicle outline detection method
CN102867193B (en) * 2012-09-14 2015-06-17 成都国科海博信息技术股份有限公司 Biological detection method and device and biological detector
CN102867193A (en) * 2012-09-14 2013-01-09 成都国科海博计算机系统有限公司 Biological detection method and device and biological detector
US9285893B2 (en) 2012-11-08 2016-03-15 Leap Motion, Inc. Object detection and tracking with variable-field illumination devices
US10609285B2 (en) 2013-01-07 2020-03-31 Ultrahaptics IP Two Limited Power consumption in motion-capture systems
US9465461B2 (en) 2013-01-08 2016-10-11 Leap Motion, Inc. Object detection and tracking with audio and optical signals
US10097754B2 (en) 2013-01-08 2018-10-09 Leap Motion, Inc. Power consumption in motion-capture systems with audio and optical signals
US11740705B2 (en) 2013-01-15 2023-08-29 Ultrahaptics IP Two Limited Method and system for controlling a machine according to a characteristic of a control object
US11353962B2 (en) 2013-01-15 2022-06-07 Ultrahaptics IP Two Limited Free-space user interface and control using virtual constructs
US10739862B2 (en) 2013-01-15 2020-08-11 Ultrahaptics IP Two Limited Free-space user interface and control using virtual constructs
US11874970B2 (en) 2013-01-15 2024-01-16 Ultrahaptics IP Two Limited Free-space user interface and control using virtual constructs
CN103150738A (en) * 2013-02-02 2013-06-12 南京理工大学 Moving object detection method for distributed multi-sensor systems
US11693115B2 (en) 2013-03-15 2023-07-04 Ultrahaptics IP Two Limited Determining positional information of an object in space
US10585193B2 (en) 2013-03-15 2020-03-10 Ultrahaptics IP Two Limited Determining positional information of an object in space
US9916009B2 (en) 2013-04-26 2018-03-13 Leap Motion, Inc. Non-tactile interface systems and methods
US11099653B2 (en) 2013-04-26 2021-08-24 Ultrahaptics IP Two Limited Machine responsiveness to dynamic user movements and gestures
US10452151B2 (en) 2013-04-26 2019-10-22 Ultrahaptics IP Two Limited Non-tactile interface systems and methods
CN103272783A (en) * 2013-06-21 2013-09-04 核工业理化工程研究院华核新技术开发公司 Color determination and separation method for color CCD color sorting machine
CN103272783B (en) * 2013-06-21 2015-11-04 核工业理化工程研究院华核新技术开发公司 Color determination and separation method for color CCD color sorting machine
US11776208B2 (en) 2013-08-29 2023-10-03 Ultrahaptics IP Two Limited Predictive information for free space gesture control and communication
US11461966B1 (en) 2013-08-29 2022-10-04 Ultrahaptics IP Two Limited Determining spans and span lengths of a control object in a free space gesture control environment
US11282273B2 (en) 2013-08-29 2022-03-22 Ultrahaptics IP Two Limited Predictive information for free space gesture control and communication
US10846942B1 (en) 2013-08-29 2020-11-24 Ultrahaptics IP Two Limited Predictive information for free space gesture control and communication
US11775033B2 (en) 2013-10-03 2023-10-03 Ultrahaptics IP Two Limited Enhanced field of view to augment three-dimensional (3D) sensory space for free-space gesture interpretation
US11868687B2 (en) 2013-10-31 2024-01-09 Ultrahaptics IP Two Limited Predictive information for free space gesture control and communication
CN103646544A (en) * 2013-11-15 2014-03-19 天津天地伟业数码科技有限公司 Vehicle behavior analysis and recognition method based on pan-tilt and camera devices
CN103646544B (en) * 2013-11-15 2016-03-09 天津天地伟业数码科技有限公司 Vehicle behavior analysis and recognition method based on pan-tilt and camera devices
CN103578121B (en) * 2013-11-22 2016-08-17 南京信大气象装备有限公司 Motion detection method based on shared Gaussian model in disturbed motion environment
CN103578121A (en) * 2013-11-22 2014-02-12 南京信大气象装备有限公司 Motion detection method based on shared Gaussian model in disturbed motion environment
US9613262B2 (en) 2014-01-15 2017-04-04 Leap Motion, Inc. Object detection and tracking for providing a virtual device experience
CN104036288A (en) * 2014-05-30 2014-09-10 宁波海视智能系统有限公司 Vehicle type classification method based on videos
US11778159B2 (en) 2014-08-08 2023-10-03 Ultrahaptics IP Two Limited Augmented reality with motion sensing
CN105472204A (en) * 2014-09-05 2016-04-06 南京理工大学 Inter-frame noise reduction method based on motion detection
CN105472204B (en) * 2014-09-05 2018-12-14 南京理工大学 Inter-frame noise reduction method based on motion detection
CN104267209A (en) * 2014-10-24 2015-01-07 浙江力石科技股份有限公司 Method and system for expressway video speed measurement based on virtual coils
CN104267209B (en) * 2014-10-24 2017-01-11 浙江力石科技股份有限公司 Method and system for expressway video speed measurement based on virtual coils
CN104950285A (en) * 2015-06-02 2015-09-30 西安理工大学 RFID (radio frequency identification) indoor positioning method based on signal difference value change of neighboring tags
CN104950285B (en) * 2015-06-02 2017-08-25 西安理工大学 RFID indoor positioning method based on signal difference changes of neighboring tags
CN106412501B (en) * 2016-09-20 2019-07-23 华中科技大学 Video-based construction safety behavior intelligent monitoring system and monitoring method
CN106412501A (en) * 2016-09-20 2017-02-15 华中科技大学 Construction safety behavior intelligent monitoring system based on video and monitoring method thereof
CN109146914A (en) * 2018-06-20 2019-01-04 上海市政工程设计研究总院(集团)有限公司 Drunk driving behavior early warning method for expressways based on video analysis
CN109146914B (en) * 2018-06-20 2023-05-30 上海市政工程设计研究总院(集团)有限公司 Drunk driving behavior early warning method for expressways based on video analysis
CN109035205A (en) * 2018-06-27 2018-12-18 清华大学苏州汽车研究院(吴江) Water hyacinth contamination detection method based on video analysis
CN108694833A (en) * 2018-07-17 2018-10-23 重庆交通大学 Traffic abnormal event detection system based on binary sensors
CN113168704A (en) * 2019-02-22 2021-07-23 轨迹人有限责任公司 System and method for driving travel path features in a driving range
CN113286194A (en) * 2020-02-20 2021-08-20 北京三星通信技术研究有限公司 Video processing method and device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
CN100502463C (en) 2009-06-17

Similar Documents

Publication Publication Date Title
CN100502463C (en) Method for collecting characteristics in telecommunication flow information video detection
CN108983219B (en) Fusion method and system for image information and radar information of traffic scene
CN102768804B (en) Video-based traffic information acquisition method
Zheng et al. A novel vehicle detection method with high resolution highway aerial image
CN110210451B (en) Zebra crossing detection method
Liu et al. A survey of vision-based vehicle detection and tracking techniques in ITS
CN110008932A (en) Vehicle line-crossing violation detection method based on computer vision
CN105574488A (en) Low-altitude aerial infrared image based pedestrian detection method
CN104239867A (en) License plate locating method and system
CN105069441A (en) Moving vehicle detection method based on background updating and particle swarm optimization algorithm
CN105574895A (en) Congestion detection method for vehicles during dynamic driving
CN104574381A (en) Full reference image quality evaluation method based on LBP (local binary pattern)
CN106778540A (en) Accurate parking event detection method based on a double-layer background model
CN103794050A (en) Real-time transport vehicle detecting and tracking method
CN104537649A (en) Vehicle steering judgment method and system based on image ambiguity comparison
Liu et al. Effective road lane detection and tracking method using line segment detector
CN113516853B (en) Multi-lane traffic flow detection method for complex monitoring scene
CN111652033A (en) Lane line detection method based on OpenCV
CN114530042A (en) Urban traffic brain monitoring system based on internet of things technology
Kanhere et al. Real-time detection and tracking of vehicle base fronts for measuring traffic counts and speeds on highways
Meshram et al. Vehicle detection and tracking techniques used in moving vehicles
Płaczek A real time vehicle detection algorithm for vision-based sensors
Lagorio et al. Automatic detection of adverse weather conditions in traffic scenes
CN116110230A (en) Vehicle lane crossing line identification method and system based on vehicle-mounted camera
CN112215109A (en) Vehicle detection method and system based on scene analysis
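The abstract's scheme (model each pixel's brightness with a Gaussian mixture, update the mixture every frame, label matching pixels as background and non-matching pixels as foreground) can be sketched as follows. This is a generic Stauffer-Grimson-style illustration for a single pixel, not the patent's "improved" model; every class name, parameter name, and default value here is an assumption.

```python
import numpy as np

class PixelMoG:
    """Per-pixel mixture-of-Gaussians background model over brightness only.

    Illustrative sketch: match -> background, no match -> foreground,
    model updated online each frame. Defaults are assumptions, not
    values taken from the patent.
    """

    def __init__(self, k=3, alpha=0.05, match_sigmas=2.5, bg_threshold=0.7):
        self.alpha = alpha                       # learning rate for online updates
        self.match_sigmas = match_sigmas         # match if within this many std devs
        self.bg_threshold = bg_threshold         # weight mass treated as background
        self.weights = np.full(k, 1.0 / k)
        self.means = np.linspace(0.0, 255.0, k)  # spread initial means over [0, 255]
        self.vars_ = np.full(k, 900.0)           # initial variance (sigma = 30)

    def update(self, value):
        """Feed one brightness sample; return True if it is a background point."""
        dist = np.abs(value - self.means) / np.sqrt(self.vars_)
        matched = dist < self.match_sigmas
        if not matched.any():
            # No Gaussian explains the pixel: replace the weakest component
            # with a wide Gaussian at the new value and call it foreground.
            i = int(np.argmin(self.weights))
            self.means[i], self.vars_[i], self.weights[i] = value, 900.0, 0.05
            self.weights /= self.weights.sum()
            return False
        # Update the closest matching Gaussian and renormalize the weights.
        i = int(np.argmin(np.where(matched, dist, np.inf)))
        self.means[i] += self.alpha * (value - self.means[i])
        self.vars_[i] += self.alpha * ((value - self.means[i]) ** 2 - self.vars_[i])
        self.vars_[i] = max(self.vars_[i], 4.0)  # variance floor for stability
        self.weights = (1.0 - self.alpha) * self.weights
        self.weights[i] += self.alpha
        self.weights /= self.weights.sum()
        # Gaussians with the highest weight/sigma ratio, together covering
        # bg_threshold of the total weight, are considered background.
        order = np.argsort(-self.weights / np.sqrt(self.vars_))
        n_bg = int(np.searchsorted(np.cumsum(self.weights[order]),
                                   self.bg_threshold)) + 1
        return i in order[:n_bg].tolist()
```

Running one such model per pixel over successive frames yields a foreground mask: a pixel that stays near-constant is absorbed into the background mixture, while a sudden brightness change (a passing vehicle) fails to match any high-weight Gaussian and is flagged as a foreground point.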

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant