CN103198300B - Parking event detection method based on double layers of backgrounds

Parking event detection method based on double layers of backgrounds

Info

Publication number
CN103198300B
CN103198300B (application CN201310104633.2A)
Authority
CN
China
Prior art keywords
background
model
target
pixel
parking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310104633.2A
Other languages
Chinese (zh)
Other versions
CN103198300A (en)
Inventor
谢正光
李宏魁
胡建平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nantong University Technology Transfer Center Co ltd
Original Assignee
Nantong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nantong University filed Critical Nantong University
Priority to CN201611069411.1A priority Critical patent/CN106778540B/en
Priority to CN201310104633.2A priority patent/CN103198300B/en
Publication of CN103198300A publication Critical patent/CN103198300A/en
Application granted granted Critical
Publication of CN103198300B publication Critical patent/CN103198300B/en

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a parking event detection method based on a double-layer background. The method mainly comprises double-layer background modeling, secondary background replacement, parking detection, and state-table updating. The absolute pixel difference between corresponding points of the main background and the secondary background is used to judge whether a stopped object has appeared; detection is then carried out on the contour of the stopped object, and if the stopped object is a vehicle, the parking state is recorded and updated. The secondary background is replaced according to whether the foreground of the background model is empty. The method detects parking events well in real time, is simple, reduces the influence of the environment on the detection result by means of the double-layer background, and accurately records the parking position and state through the parking-event state table. By configuring the detection parameters and regions of interest, the method can be applied to different scenes such as expressways, parking lots and urban roads, with high accuracy and good real-time performance.

Description

Parking event detection method based on a double-layer background
Technical field
The present invention relates to the field of video-based detection, and in particular to a parking event detection method based on a double-layer background.
Background technology
With the rapid development of the national economy and the explosive growth of motor vehicles, China's traffic problems have become increasingly serious. Traffic congestion and secondary accidents caused by events such as illegally stopped vehicles, spilled cargo, landslides, vehicles stalled after accidents, and adverse weather keep increasing. Such incidents are sudden and unpredictable, and once they occur they can cause great loss of life and property. Conventional parking detection relies mainly on manual monitoring and traffic-flow data collection, which consume large amounts of manpower, material and financial resources in traffic control. Establishing an automatic, video-based parking event detection system is therefore particularly important and necessary.
A number of video-based parking event detection methods have been developed worldwide, mainly methods based on virtual-box pixel and gray-scale statistics, methods based on target velocity, and methods based on block segmentation over a single background.
Methods based on virtual-box pixel and gray-scale statistics obtain moving and static pixels by background subtraction and judge whether a vehicle has stopped according to changes of pixels or gray levels in a predefined region. Although the algorithm is simple, it is easily disturbed by external illumination and requires the virtual detection region to be set manually, so its practicality is poor. Parking event detection methods based on target velocity need to track vehicles in real time, calibrate the scene, and compute the target's motion velocity in real time; the algorithm is complex and the false-alarm rate is high. Block-segmentation parking detection methods based on a single background save three different background images, compare them pairwise to determine suspicious blocks, and then inspect the suspicious blocks to decide whether a vehicle has stopped; this approach requires the image to be partitioned into blocks, is unsuitable for complex road sections, and because the three background images come from the same background model the detection accuracy is not high. These methods lack an effective analysis of whether a vehicle stops or drives away, and their effectiveness and practicality in analysing parking events cannot meet actual requirements.
Content of the invention
The object of the invention is to provide a parking event detection method based on a double-layer background that is less affected by the environment and detects stopped vehicles accurately.
The technical solution of the present invention is:
A parking event detection method based on a double-layer background, characterized by comprising the following steps:
(1) Establish two different background layers, a main background and a secondary background, exploiting the fact that the main background updates quickly and is sensitive to stopped targets while the secondary background updates slowly and responds slowly to stopped targets, and compare the difference of the two images;
(2) Take the absolute difference of corresponding pixels of the two background layers to obtain static targets, and binarize the resulting difference image. Suppress shadows in HSI space to eliminate the shadow pixels belonging to the target, and apply a morphological closing to the binary image to fill discontinuous holes;
(3) Weight the target pixels in the image according to the camera focal length, height and angle: pixels far from the camera receive larger weights and nearby pixels smaller weights. Sum the weighted target pixels; when the sum reaches a preset threshold, the stop-event counter S is incremented by 1, and when S exceeds its threshold the binary image is saved;
(4) Filter the saved binary image, then segment the targets and perform contour detection. Set a detection sensitivity, draw a rectangular box around each target whose pixel count exceeds the sensitivity threshold, and judge from the length-width ratio whether the target is a vehicle. If the target is a vehicle, record the coordinates of the intersection of the diagonals of the rectangle and store them in the parking state table;
(5) Process and analyse the pixels to judge whether the target is a vehicle. If the target is judged to be a vehicle, trigger a parking alarm and assign the current frame of the main background to the secondary background, then continue the difference comparison. When there is no target in the secondary background, keep that frame as the pure background;
(6) When the foreground of the secondary background model is empty, store the current background image as the pure background. When a stopped target is detected, store the current frame of the main background and compare it with the pure background image: if the difference between the two frames is less than a threshold, return to step (2) and continue; if the difference is greater than the threshold, replace the secondary background with this main-background frame and return to step (2). When a stopped vehicle is detected again, compare the target centre coordinates, judge whether the target has driven away, and update the state table.
Compared with parking detection methods based on virtual-coil statistics, on target tracking, and on block segmentation over a single background, the parking event detection method based on a double-layer background of the present invention is less affected by the environment, requires no image configuration or calibration, uses a simpler and more feasible algorithm, reduces the computational complexity of parking detection, and improves its real-time performance. HSI shadow suppression and morphological filtering eliminate interference so that parking detection is more accurate; a state table is established and updated so that the parking position and state are recorded accurately.
Brief description of the drawings
The invention will be further described with reference to the accompanying drawings and examples.
Fig. 1 is a flow chart of the parking detection method based on a double-layer background.
Specific embodiment
A parking event detection method based on a double-layer background is described with reference to the accompanying drawings and embodiments. The method establishes a mixture-of-Gaussians model as the secondary background and a RunningAvg model as the main background, takes the difference of the two background layers, and binarizes the difference image to obtain static or slowly moving targets. Statistics and contour recognition on the binary image detect whether a vehicle has stopped or an object has been left behind; the current RunningAvg background is then compared with the pure background image to judge whether the road is clear, and the parking state table and the secondary background are updated for the next round of parking detection. The specific implementation steps are as follows:
Step 1: convert the input video image I_n to gray scale to obtain the gray image I_n_gray, and establish the main background and the secondary background from I_n_gray;
Establishment and updating of the main background, the secondary background and the pure background:
The main background model uses a RunningAvg (running average) model, shown below:
$$B_{Avg}(i,j) = \alpha_{avg} B_{n-1}(i,j) + (1 - \alpha_{avg})\, I_n(i,j)$$
——B_Avg(i, j) is the RunningAvg background model;
——B_n(i, j) is the background value after the update at frame n;
——B_{n-1}(i, j) is the background value at frame n-1;
——I_n(i, j) is the gray value of the current video frame;
——α_avg is the update rate.
The present invention improves α_avg as follows:
$$\alpha_{avg} = \alpha_{avg1} M_n + \alpha_{avg2} (1 - M_n)$$
$$M_n = \begin{cases} 0, & D_n(i,j) < T \\ 1, & D_n(i,j) \geq T \end{cases}$$
——M_n is the state bit;
——D_n(i, j) = |I_n(i, j) - I_{n-1}(i, j)| is the inter-frame difference image;
——α_avg1 and α_avg2 are variable weighting parameters.
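To make the main-background update concrete, the following is a minimal sketch in Python with NumPy of the RunningAvg model with the per-pixel adaptive rate described above; the function name and the values of α_avg1, α_avg2 and T are illustrative assumptions, not values fixed by the invention.

```python
import numpy as np

def update_main_background(bg, frame, prev_frame, alpha1=0.99, alpha2=0.90, T=15):
    """B_Avg(i,j) = alpha*B_{n-1}(i,j) + (1-alpha)*I_n(i,j), where alpha switches
    per pixel between alpha1 (changing pixels) and alpha2 (static pixels)."""
    bg = bg.astype(np.float32)
    frame_f = frame.astype(np.float32)
    # D_n(i,j) = |I_n - I_{n-1}|; M_n = 1 where the inter-frame change exceeds T
    diff = np.abs(frame_f - prev_frame.astype(np.float32))
    M = (diff >= T).astype(np.float32)
    alpha = alpha1 * M + alpha2 * (1.0 - M)        # alpha_avg = a1*M_n + a2*(1 - M_n)
    new_bg = alpha * bg + (1.0 - alpha) * frame_f  # slower update where motion is detected
    return new_bg.astype(np.uint8)
```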
To establish the secondary background model, several predefined Gaussian models are first initialized, their parameters are initialized, and the parameters to be used later are obtained. Next, each pixel of each frame is processed to see whether it matches one of the models: if it matches, the pixel is assigned to that model and the model is updated with the new pixel value; if it does not match, a new Gaussian model is created from the pixel and its parameters are initialized, replacing the least likely model among the existing ones. Finally, the several most probable models are selected as the background model, which lays the foundation for background extraction.
The mixture-of-Gaussians model p(x_N) gives the statistical probability of occurrence of a pixel value, as shown below:
$$p(x_N) = \sum_{j=1}^{K} w_j\, \eta(x_N; \theta_j)$$
——w_j is the weight of the j-th Gaussian background component, x_N is the input sample, and θ_j are the parameters of the j-th component;
——η(x; θ_k) is the k-th Gaussian (normal) density, whose expression is:
$$\eta(x; \theta_k) = \eta(x; \mu_k, \Sigma_k) = \frac{1}{(2\pi)^{D/2}\, |\Sigma_k|^{1/2}}\, e^{-\frac{1}{2}(x - \mu_k)^T \Sigma_k^{-1} (x - \mu_k)}$$
——μ_k is the mean;
——Σ_k = σ_k² I is the covariance;
The background pixels are determined by:
$$B_{Gauss}(i,j) = \arg\min_b \left( \sum_{j=1}^{b} w_j > T \right)$$
$$\hat{w}_k^{\,N+1} = (1 - a)\,\hat{w}_k^{\,N} + a\,\hat{p}(w_k \mid x_{N+1})$$
——B_Gauss is the background pixel;
——a is the learning rate;
——w_k is the initial weight and ŵ_k its expected value;
——T is the background threshold.
The mixture-of-Gaussians model characterizes each pixel of the image with K Gaussian components. After a new frame is acquired, the mixture model is updated and each pixel of the current image is matched against it: if the match succeeds, the pixel is judged to be a background point, otherwise a foreground point. The whole Gaussian model is mainly determined by two parameters, the mean and the variance, and the learning mechanisms chosen for them directly influence the stability, accuracy and convergence of the model. Because the model is used for background extraction in the presence of moving targets, the variance and mean of each Gaussian must be updated in real time. To improve the learning capacity of the model, the improved method uses different learning rates for the mean and the variance. To improve the detection of large, slowly moving targets in busy scenes, the concept of a weighted mean is introduced: a background image is built and updated in real time, and pixels are then classified into foreground and background by combining the weights, the weighted mean and the background image.
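As an illustration of the secondary background, the sketch below uses OpenCV's built-in MOG2 background subtractor as a stand-in for the per-pixel K-Gaussian model described above; it follows the same mixture-of-Gaussians idea but is not the patent's exact update scheme, and the history, varThreshold and learning-rate values are assumptions.

```python
import cv2

# One global mixture model for the secondary background (parameter values are assumptions)
mog = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=False)

def update_secondary_background(gray_frame, learning_rate=0.005):
    """Feed one grayscale frame to the mixture model; return the foreground mask,
    the modelled background image B_Gauss, and a flag that is True when the
    foreground is empty (used to store the 'pure background')."""
    fg_mask = mog.apply(gray_frame, learningRate=learning_rate)
    bg_gauss = mog.getBackgroundImage()
    is_pure = cv2.countNonZero(fg_mask) == 0
    return fg_mask, bg_gauss, is_pure
```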
The so-called pure background: when the foreground of the secondary mixture-of-Gaussians background model is empty, the current image is saved as B_G_clear, the pure background.
Step 2: initialize the parking state table, which marks whether any vehicle has stopped in the scene.
Step 3: take the absolute difference of corresponding pixels of the two background layers to obtain the static or slowly moving target image D(i, j) = |B_Avg(i, j) - B_Gauss(i, j)|, and binarize the difference image with threshold T_h. Suppress shadows in HSI space to eliminate the shadow pixels belonging to the target, then apply a morphological closing to remove discontinuous holes, finally obtaining the image D_det(i, j);
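A minimal sketch of Step 3 follows, assuming 8-bit grayscale background images and an optional colour frame/background pair for the shadow test; HSV is used here as an approximation of the HSI shadow suppression, and T_h and the shadow ratios are illustrative values.

```python
import cv2
import numpy as np

def static_target_mask(bg_avg, bg_gauss, Th=25, color_frame=None, color_bg=None):
    """D(i,j) = |B_Avg - B_Gauss|, binarized with Th, shadow-suppressed and closed."""
    diff = cv2.absdiff(bg_avg, bg_gauss)
    _, mask = cv2.threshold(diff, Th, 255, cv2.THRESH_BINARY)
    if color_frame is not None and color_bg is not None:
        # Shadow pixels: value darker than the background but hue roughly unchanged
        hsv_f = cv2.cvtColor(color_frame, cv2.COLOR_BGR2HSV).astype(np.float32)
        hsv_b = cv2.cvtColor(color_bg, cv2.COLOR_BGR2HSV).astype(np.float32)
        ratio = (hsv_f[..., 2] + 1.0) / (hsv_b[..., 2] + 1.0)
        shadow = (ratio > 0.4) & (ratio < 0.9) & (np.abs(hsv_f[..., 0] - hsv_b[..., 0]) < 10)
        mask[shadow] = 0
    # Morphological closing removes the discontinuous holes mentioned above
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
```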
Step 4: weight the white pixels of D_det(i, j) according to the camera focal length and angle. In this example the image is divided into 5 parts and the pixels are weighted from far to near with weights 2.0, 1.6, 1.2, 1.1 and 1.0. The weighted pixels are summed; when the sum exceeds the threshold T_p (T_p = 220 in this example) the counter S is incremented by 1, otherwise S = 0. When S is greater than or equal to 90, the binary image D_obj is saved;
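The distance weighting of Step 4 can be sketched as follows, splitting the mask into five horizontal bands weighted 2.0, 1.6, 1.2, 1.1 and 1.0 from far to near and driving the counter S with the example values T_p = 220 and 90 frames; the split into equal row bands is an assumption about how the five parts are chosen.

```python
import numpy as np

WEIGHTS = [2.0, 1.6, 1.2, 1.1, 1.0]   # top of the image (far) to bottom (near)

def weighted_white_count(mask):
    """Sum the white pixels of the binary mask, band by band, with the weights above."""
    total = 0.0
    for band, w in zip(np.array_split(mask, 5, axis=0), WEIGHTS):
        total += w * np.count_nonzero(band)
    return total

def update_stop_counter(S, mask, Tp=220, persist=90):
    """Increment S while the weighted count stays above Tp, reset it otherwise;
    the second return value says whether this binary image D_obj should be saved."""
    S = S + 1 if weighted_white_count(mask) > Tp else 0
    return S, S >= persist
```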
Step 5: filter the image D_obj, segment the targets and perform contour detection. Set the detection sensitivity T_p; for each target whose pixel count is greater than or equal to T_p, determine the top-left and bottom-right corners by contour detection and mark the target with a box, and remove non-vehicle targets using the length-width ratio of the box and the number of white pixels inside it.
With the top-left corner of the contour at (x_1, y_1) and the bottom-right corner at (x_2, y_2), the length is l = max(|x_1 - x_2|, |y_1 - y_2|) and the width is w = min(|x_1 - x_2|, |y_1 - y_2|). When t_1 ≤ l/w ≤ t_2 holds, the target is a vehicle, where t_1 and t_2 are chosen according to the length-width ratio of a vehicle. When the target is a vehicle, a parking alarm is triggered, and the coordinates of the intersection of the diagonals of the rectangle are recorded and stored in the parking state table;
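A sketch of the contour and length-width-ratio test of Step 5, using OpenCV (4.x-style findContours); the values of t_1, t_2 and the minimum blob area standing in for the pixel-count sensitivity are illustrative assumptions.

```python
import cv2

def detect_vehicles(d_obj, t1=1.0, t2=4.0, min_area=200):
    """Box every blob in the filtered mask and keep those whose length/width
    ratio l/w lies in [t1, t2]; returns the diagonal-intersection coordinates."""
    d_obj = cv2.medianBlur(d_obj, 5)
    contours, _ = cv2.findContours(d_obj, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV >= 4
    centres = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if cv2.contourArea(c) < min_area:          # blob-size test standing in for T_p
            continue
        l, s = max(w, h), min(w, h)
        if s > 0 and t1 <= l / s <= t2:            # length-width ratio test
            centres.append((x + w / 2.0, y + h / 2.0))  # intersection of the box diagonals
    return centres
```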
Step 6: the coordinates of the intersection of the diagonals of the box are (x_{i1}, y_{i1}), where x_{i1} = (x_1 + x_2)/2 and y_{i1} = (y_1 + y_2)/2, and the coordinates are stored. When a stopped vehicle target is detected again, the diagonal intersection coordinates (x_{j1}, y_{j1}) of the next bounding box are obtained. When max(|x_{i1} - x_{j1}|, |y_{i1} - y_{j1}|) ≤ T_c, a vehicle is considered to have driven away: the stored vehicle coordinates are removed from the state table and the number of stopped vehicles is updated, i.e. the vehicle count is reduced by 1. When max(|x_{i1} - x_{j1}|, |y_{i1} - y_{j1}|) > T_c, a vehicle is considered to have entered: its coordinates are added to the state table and the number of stopped vehicles is updated, i.e. the vehicle count is increased by 1, where T_c is the offset threshold, chosen in relation to the vehicle width W;
The current RunningAvg background is stored and compared with the pure Gaussian background image B_G_clear. If the difference between the two frames is less than the threshold T_d, jump to Step 3 and continue; if the difference is greater than or equal to T_d, replace the Gaussian background with this RunningAvg background and then jump to Step 3 and continue;
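The state-table update and the background-replacement test of Step 6 might look like the following sketch; the state table is modelled as a simple list of diagonal-intersection coordinates, and the T_c and T_d defaults are assumptions.

```python
import cv2

def update_state_table(state_table, new_centres, Tc=20.0):
    """A repeated detection at (almost) the same centre means the parked vehicle
    has driven off; a centre far from every stored one is a newly stopped vehicle."""
    for (xj, yj) in new_centres:
        departed = None
        for idx, (xi, yi) in enumerate(state_table):
            if max(abs(xi - xj), abs(yi - yj)) <= Tc:
                departed = idx
                break
        if departed is not None:
            state_table.pop(departed)        # vehicle left: stopped-vehicle count - 1
        else:
            state_table.append((xj, yj))     # vehicle entered: stopped-vehicle count + 1
    return state_table

def secondary_background_stale(bg_avg, pure_bg, Td=20):
    """True when the RunningAvg frame differs enough from the stored pure
    background that it should replace the secondary (Gaussian) background."""
    return cv2.absdiff(bg_avg, pure_bg).mean() >= Td
```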
Parameter settings for the main background: the update rate parameters α_avg1 and α_avg2.
Parameter settings for the secondary background: Gaussian distribution weight-sum threshold T = 0.7, background threshold T = 2.5, learning rate a, initial weight w_k = 0.05, initial standard deviation σ_k = 30.
In Step 3, the threshold T_h = 25; in Step 5, the detection sensitivity T_p = 200.
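Tying the pieces together, a per-frame driver loop under the stated parameter values might look like the sketch below; it relies on the illustrative helper functions defined in the earlier sketches (update_main_background, update_secondary_background, static_target_mask, update_stop_counter, detect_vehicles, update_state_table, secondary_background_stale), which are assumptions rather than part of the patent text.

```python
import cv2

def run(video_path):
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        return
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    bg_avg, pure_bg = prev.copy(), prev.copy()
    state_table, S = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        bg_avg = update_main_background(bg_avg, gray, prev)          # main background layer
        _, bg_gauss, is_pure = update_secondary_background(gray)     # secondary background layer
        if is_pure:
            pure_bg = bg_gauss.copy()                                # keep the pure background
        mask = static_target_mask(bg_avg, bg_gauss, Th=25)           # Step 3
        S, save = update_stop_counter(S, mask, Tp=220)               # Step 4
        if save:
            state_table = update_state_table(state_table, detect_vehicles(mask))  # Steps 5-6
            if secondary_background_stale(bg_avg, pure_bg, Td=20):
                pass  # a full implementation would reseed the secondary model from bg_avg here
        prev = gray
    cap.release()
```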

Claims (1)

1. A parking event detection method based on a double-layer background, characterized by comprising the following steps:
(1) Establish two different background layers, a main background and a secondary background, exploiting the fact that the main background updates quickly and is sensitive to stopped targets while the secondary background updates slowly and responds slowly to stopped targets, and compare the difference of the two images;
The main background model uses a RunningAvg (running average) model, shown below:
$$B_{Avg}(i,j) = \alpha_{avg} B_{n-1}(i,j) + (1 - \alpha_{avg})\, I_n(i,j)$$
——B_Avg(i, j) is the RunningAvg background model;
——B_n(i, j) is the background value after the update at frame n;
——B_{n-1}(i, j) is the background value at frame n-1;
——I_n(i, j) is the gray value of the current video frame;
——α_avg is the update rate;
α_avg is improved as follows:
$$\alpha_{avg} = \alpha_{avg1} M_n + \alpha_{avg2} (1 - M_n)$$
$$M_n = \begin{cases} 0, & D_n(i,j) < T \\ 1, & D_n(i,j) \geq T \end{cases}$$
——M_n is the state bit;
——D_n(i, j) = |I_n(i, j) - I_{n-1}(i, j)| is the inter-frame difference image;
——α_avg1 and α_avg2 are variable weighting parameters;
To establish the secondary background model, several predefined Gaussian models are first initialized, their parameters are initialized, and the parameters to be used later are obtained. Next, each pixel of each frame is processed to see whether it matches one of the models: if it matches, the pixel is assigned to that model and the model is updated with the new pixel value; if it does not match, a new Gaussian model is created from the pixel and its parameters are initialized, replacing the least likely model among the existing ones. Finally, the several most probable models are selected as the background model, which lays the foundation for background extraction;
The mixture-of-Gaussians model p(x_N) gives the statistical probability of occurrence of a pixel value, as shown below:
$$p(x_N) = \sum_{j=1}^{K} w_j\, \eta(x_N; \theta_j)$$
——w_j is the weight of the j-th Gaussian background component, x_N is the input sample, and θ_j are the parameters of the j-th component;
——η(x; θ_k) is the k-th Gaussian (normal) density, whose expression is:
$$\eta(x; \theta_k) = \eta(x; \mu_k, \Sigma_k) = \frac{1}{(2\pi)^{D/2}\, |\Sigma_k|^{1/2}}\, e^{-\frac{1}{2}(x - \mu_k)^T \Sigma_k^{-1} (x - \mu_k)}$$
——μ_k is the mean;
——Σ_k = σ_k² I is the covariance;
The background pixels are determined by:
$$B_{Gauss}(i,j) = \arg\min_b \left( \sum_{j=1}^{b} w_j > T \right)$$
$$\hat{w}_k^{\,N+1} = (1 - a)\,\hat{w}_k^{\,N} + a\,\hat{p}(w_k \mid x_{N+1})$$
——B_Gauss is the background pixel;
——a is the learning rate;
——w_k is the initial weight and ŵ_k its expected value;
——T is the background threshold;
The mixture-of-Gaussians model characterizes each pixel of the image with K Gaussian components. After a new frame is acquired, the mixture model is updated and each pixel of the current image is matched against it: if the match succeeds, the pixel is judged to be a background point, otherwise a foreground point. The whole Gaussian model is mainly determined by two parameters, the mean and the variance, and the learning mechanisms chosen for them directly influence the stability, accuracy and convergence of the model. Because the model is used for background extraction in the presence of moving targets, the variance and mean of each Gaussian must be updated in real time. To improve the learning capacity of the model, the improved method uses different learning rates for the mean and the variance. To improve the detection of large, slowly moving targets in busy scenes, the concept of a weighted mean is introduced: a background image is built and updated in real time, and pixels are then classified into foreground and background by combining the weights, the weighted mean and the background image;
(2) Take the absolute difference of corresponding pixels of the two background layers to obtain static targets, and binarize the difference image. Suppress shadows in HSI space to eliminate the shadow pixels belonging to the target, and apply a morphological closing to the binary image to fill discontinuous holes;
(3) Weight the target pixels in the image according to the camera focal length, height and angle: pixels far from the camera receive larger weights and nearby pixels smaller weights. Sum the weighted target pixels; when the sum reaches a preset threshold, the stop-event counter S is incremented by 1, and when S exceeds its threshold the binary image is saved;
(4) Filter the binary image, then segment the targets and perform contour detection. Set a detection sensitivity, draw a rectangular box around each target whose pixel count exceeds the sensitivity threshold, and judge from the length-width ratio whether the target is a vehicle. If the target is a vehicle, record the coordinates of the intersection of the diagonals of the rectangle and store them in the parking state table;
(5) Process and analyse the pixels to judge whether the target is a vehicle. If the target is judged to be a vehicle, trigger a parking alarm and assign the current frame of the main background to the secondary background, then continue the difference comparison. When there is no target in the secondary background, keep the image as the pure background;
(6) When the foreground of the secondary background model is empty, store the current background image as the pure background. When a stopped target is detected, store the current frame of the main background and compare it with the pure background image: if the difference between the two frames is less than a threshold, return to step (2) and continue; if the difference is greater than the threshold, replace the secondary background with this main-background frame and return to step (2). When a stopped vehicle is detected again, compare the target centre coordinates, judge whether the target has driven away, and update the state table.
CN201310104633.2A 2013-03-28 2013-03-28 Parking event detection method based on double layers of backgrounds Active CN103198300B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201611069411.1A CN106778540B (en) 2013-03-28 2013-03-28 Parking event detection method based on double-layer background with accurate parking detection
CN201310104633.2A CN103198300B (en) 2013-03-28 2013-03-28 Parking event detection method based on double layers of backgrounds

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310104633.2A CN103198300B (en) 2013-03-28 2013-03-28 Parking event detection method based on double layers of backgrounds

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN201611069411.1A Division CN106778540B (en) 2013-03-28 2013-03-28 Parking event detection method based on double-layer background with accurate parking detection

Publications (2)

Publication Number Publication Date
CN103198300A CN103198300A (en) 2013-07-10
CN103198300B true CN103198300B (en) 2017-02-08

Family

ID=48720836

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201310104633.2A Active CN103198300B (en) 2013-03-28 2013-03-28 Parking event detection method based on double layers of backgrounds
CN201611069411.1A Active CN106778540B (en) 2013-03-28 2013-03-28 Parking event detection method based on double-layer background with accurate parking detection

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201611069411.1A Active CN106778540B (en) 2013-03-28 2013-03-28 Parking event detection method based on double-layer background with accurate parking detection

Country Status (1)

Country Link
CN (2) CN103198300B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10489662B2 (en) * 2016-07-27 2019-11-26 Ford Global Technologies, Llc Vehicle boundary detection
CN106791275B (en) * 2016-12-19 2019-09-27 中国科学院半导体研究所 A kind of image event detection marker method and system
CN109101934A (en) * 2018-08-20 2018-12-28 广东数相智能科技有限公司 Model recognizing method, device and computer readable storage medium
CN109285341B (en) * 2018-10-31 2021-08-31 中电科新型智慧城市研究院有限公司 Urban road vehicle abnormal stop detection method based on real-time video
CN109741350B (en) * 2018-12-04 2020-10-30 江苏航天大为科技股份有限公司 Traffic video background extraction method based on morphological change and active point filling
CN112101279B (en) * 2020-09-24 2023-09-15 平安科技(深圳)有限公司 Target object abnormality detection method, target object abnormality detection device, electronic equipment and storage medium
CN113409587B (en) * 2021-06-16 2022-11-22 北京字跳网络技术有限公司 Abnormal vehicle detection method, device, equipment and storage medium
CN114724063B (en) * 2022-03-24 2023-02-24 华南理工大学 Road traffic incident detection method based on deep learning

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1350941A (en) * 2000-10-27 2002-05-29 新鼎系统股份有限公司 Method and equipment for tracking image of moving vehicle
CN102244769A (en) * 2010-05-14 2011-11-16 鸿富锦精密工业(深圳)有限公司 Object and key person monitoring system and method thereof
CN102314591A (en) * 2010-07-09 2012-01-11 株式会社理光 Method and equipment for detecting static foreground object

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6985859B2 (en) * 2001-03-28 2006-01-10 Matsushita Electric Industrial Co., Ltd. Robust word-spotting system using an intelligibility criterion for reliable keyword detection under adverse and unknown noisy environments
KR100799333B1 (en) * 2003-12-26 2008-01-30 재단법인 포항산업과학연구원 Apparatus for Determining Type of Vehicle for Controling Parking and Method Thereof
CN101447082B (en) * 2008-12-05 2010-12-01 华中科技大学 Detection method of moving target on a real-time basis
CN102096931B (en) * 2011-03-04 2013-01-09 中南大学 Moving target real-time detection method based on layering background modeling
CN102496281B (en) * 2011-12-16 2013-11-27 湖南工业大学 Vehicle red-light violation detection method based on combination of tracking and virtual loop
CN102819952B (en) * 2012-06-29 2014-04-16 浙江大学 Method for detecting illegal lane change of vehicle based on video detection technique
US20140126818A1 (en) * 2012-11-06 2014-05-08 Sony Corporation Method of occlusion-based background motion estimation

Also Published As

Publication number Publication date
CN103198300A (en) 2013-07-10
CN106778540A (en) 2017-05-31
CN106778540B (en) 2019-06-28

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20201015

Address after: 226019 No.205, building 6, Nantong University, No.9, Siyuan Road, Nantong City, Jiangsu Province

Patentee after: Center for technology transfer, Nantong University

Address before: 226019 Jiangsu city of Nantong province sik Road No. 9

Patentee before: NANTONG University

TR01 Transfer of patent right
CP03 Change of name, title or address

Address after: 226001 No.9, Siyuan Road, Chongchuan District, Nantong City, Jiangsu Province

Patentee after: Nantong University Technology Transfer Center Co.,Ltd.

Address before: 226019 No.205, building 6, Nantong University, No.9, Siyuan Road, Nantong City, Jiangsu Province

Patentee before: Center for technology transfer, Nantong University

CP03 Change of name, title or address
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20130710

Assignee: Nantong qianrui Information Technology Co.,Ltd.

Assignor: Nantong University Technology Transfer Center Co.,Ltd.

Contract record no.: X2023980053321

Denomination of invention: A Parking Event Detection Method Based on Double Layer Background

Granted publication date: 20170208

License type: Common License

Record date: 20231221

EE01 Entry into force of recordation of patent licensing contract