CN100459704C - Intelligent tunnel safety monitoring apparatus based on omnibearing computer vision - Google Patents
- Publication number
- CN100459704C (grant); CNB2006100516330A / CN200610051633A (application)
- Authority
- CN
- China
- Prior art keywords
- fire
- formula
- area
- vehicle
- tunnel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Traffic Control Systems (AREA)
Abstract
The monitoring apparatus includes a microprocessor, a video sensor for monitoring the tunnel site, and a communication module for communicating with the outside. The microprocessor includes the following modules: an image data reading module for reading the video image information transferred from the video sensor; a file storage module for storing the data collected by the video sensor to a storage device; and an on-site real-time playing module, connected to an external display device, for playing the monitored on-site picture in real time. The output of the video sensor is connected to the microprocessor through the communication link. An omnidirectional computer vision sensor monitors the tunnel site, and the images are processed and analyzed. By detecting the variation characteristics of the fire flame in its early phase, the invention discovers early fires in the tunnel and also obtains traffic data such as vehicle flow rate, lane occupancy ratio, vehicle speed, and traffic accidents.
Description
(1) technical field
The invention belongs to the application of omnidirectional computer vision sensor technology, image recognition technology, computer control technology, and communication technology to tunnel safety monitoring, and relates to technical fields such as tunnel safety operation management.
(2) background technology
With China's rapid economic development, highway and vehicular tunnel construction projects grow day by day, and tunnel operation safety problems become more and more prominent. Apart from the civil engineering quality of the tunnel itself, the supervision, control, and management of the tunnel have become an important topic for the safe, normal operation of road tunnels. Statistics show that tunnel safety accidents occur every year both at home and abroad, with tunnel fires causing the greatest losses; the most prominent characteristics of a tunnel fire are heavy smoke and high temperature. Tunnel space is small, so once a fire breaks out, natural ventilation is difficult, smoke accumulates, and the heat produced by combustion does not dissipate easily. The incidence of tunnel fires is positively correlated with the growing vehicle flow rate; rear-end collisions, short circuits of electrical equipment, poor vehicle maintenance, and the loading of dangerous goods are potential root causes of tunnel fires. Statistics show that in China nearly ten thousand vehicle fires currently occur on the road every year, which is a very large potential safety hazard for tunnel safety management.
Tunnel fires must not be taken lightly. From the Mont Blanc tunnel fire in Europe to the Austrian tunnel train fire accident in 2000, major fire accidents have recurred in European tunnels in recent years, and many of the heavy losses were due to insufficient safety design and the lack of good tunnel fire detection means. Once a fire breaks out in a tunnel, it spreads rapidly, firefighting and rescue are very difficult, and heavy losses easily result.
Fire smoke flow is a two-phase flow; the number of suspended smoke particles, the particle agglomeration effect, and the smoke turbulence effect are decisive factors for the optical characteristics of fire image detection, and are important subjects of detection and early-warning research. Traditional fire alarm systems are generally based on infrared sensors and smoke sensors. These systems mostly adopt concentration detection methods and do not detect the flame itself, so their false alarm rate is high, their detection time is long, and some situations cannot be forecast. In the fire alarm of a large space such as a tunnel, the sensor signal becomes very faint, and even a high-precision sensor can fail to work because of interference noise. At present some tunnels have adopted relatively mature detection methods, such as smoke-sensing, temperature-sensing, and light-sensing detectors, which use the smoke, temperature, and light characteristics of the fire flame respectively to detect the fire. But in a large-space, large-area tunnel with a relatively harsh environment, existing fire detection equipment cannot play its role, while digital image processing techniques, using the image characteristics of the fire flame, can solve the detection problem in the tunnel.
With the development of computer technology and image processing technology, communication, control, Ethernet, and bus technologies have broken through the original technical bottlenecks, making high-speed information sharing between the supervision and control systems of a tunnel possible. At present tunnel safety monitoring mainly involves the following aspects: 1) monitoring of CO, SO, NO, etc., visibility and wind speed measurement, ventilation system control; 2) power supply control, fire alarm, emergency call, water pump control; 3) accident video monitoring and warning indication, tunnel and lane open/close indication, speed limit indication; 4) height limit control, lighting system control, lane guidance, SOS (voice prompt); 5) traffic data measurement (vehicle counting, lane occupancy ratio, vehicle classification, vehicle speed measurement, etc.).
Machine vision is an extension of human vision. Through machine vision and image recognition technology, fires and various other traffic safety problems can be found immediately and accurately; this is an indisputable fact. The basis of the rapidity of image monitoring is that the information received by vision uses light as its communication medium; image information is abundant and intuitive, laying a good foundation for the identification and judgment of an incipient fire, and no other fire detection technology can provide such abundant and intuitive information. In addition, the key component of image monitoring, the image sensing assembly, makes only indirect contact with the outside world through an optical lens; this structure ensures that image monitoring technology can be used in the harsh environment of tunnel safety supervision. Therefore, using image recognition technology in tunnel safety detection has the following remarkable advantages: 1) it can be used in a large-space, large-area tunnel environment; 2) it has high reliability in dusty, high-humidity tunnels; 3) it can react quickly to the image information of a fire phenomenon; 4) it can intuitively provide fire information and current traffic situation information in the tunnel; 5) it can simultaneously satisfy other tunnel safety monitoring requirements, such as visibility measurement, traffic data measurement in the tunnel (vehicle counting, lane occupancy ratio, vehicle classification, vehicle speed measurement, etc.), and information on traffic accidents such as vehicle rear-end collisions; it is one machine with multiple functions and can significantly improve the performance-price ratio.
An intelligent tunnel safety monitoring apparatus based on machine vision can roughly be considered to consist of two systems, namely an image-type fire alarm system and an image-type traffic data measurement and traffic accident detection system.
The image-type fire alarm system uses digital image processing techniques to realize automatic fire alarm. Increasingly strict fire safety requirements and the rapid development of high technology are pushing detection and early warning toward image-based and intelligent methods; the image-based fire detection method is a detection method based on flame characteristics. Therefore, countries all over the world are devoted to researching and developing fire detection methods and equipment capable of early fire prediction. Compared with traditional forecasting methods, realizing fire forecast with digital image processing and pattern recognition technology can effectively improve forecast precision, greatly shorten the forecast time, and provide more abundant fire information.
Like the image-type fire alarm system above, the image-type traffic data measurement and traffic accident detection system can also use digital image processing and pattern recognition technology to measure traffic data in the tunnel such as vehicle flow rate, lane occupancy ratio, vehicle classification, and vehicle speed, and to detect other traffic accidents occurring in the tunnel. Compared with the traditional method of sensing traffic information with ground induction coils, collecting traffic information with digital image processing and pattern recognition technology can effectively improve detection precision, complete the measurement of multiple data items simultaneously, is easy to maintain, and provides more abundant traffic information.
The intelligent tunnel safety monitoring apparatus is a tunnel fire and traffic accident automatic monitoring and warning system with the computer as its core, developed by combining photoelectric technology and computer image processing technology. The tunnel fire image detection method is a novel fire detection method based on digital image processing and analysis: a camera monitors the scene, the captured consecutive images are input to the computer, image processing and analysis are performed continuously, and fires are detected through the body variation characteristics of the incipient fire flame in the tunnel. The image detection method for traffic data measurement and traffic accidents is likewise a novel tunnel safety detection method based on digital image processing and analysis: a camera monitors the scene, the captured consecutive images are input to the computer, image processing and analysis are performed continuously, and by recognizing and analyzing the vehicles passing through the tunnel, traffic data such as vehicle flow rate, lane occupancy ratio, vehicle classification, and vehicle speed are obtained, and other traffic accidents occurring in the tunnel are detected. The early stage of a tunnel fire is always closely related to vehicles, so these two detection methods share a common point: both first detect the vehicles in the tunnel and then recognize them according to judgment features.
At present some sites use pan-tilt (PTZ) camera platforms for fire and road safety detection. Although omnidirectional images can be obtained, the pan-tilt platform uses a mechanical rotating device that suffers from mechanical wear, a large maintenance workload, energy consumption, relatively complex algorithms, and the inability to process in real time.
The recently developed omnibearing vision sensor, ODVS (OmniDirectional Vision Sensor), provides a new solution for obtaining panoramic images of a scene in real time. The characteristics of ODVS are a wide field of view (360 degrees): it can compress the information of a hemispherical field of view into one image, and the information content of that image is large; when obtaining a scene image, the placement of the ODVS in the scene is freer; the ODVS needs no aiming at a target when monitoring the environment; the algorithm for detecting and tracking moving objects within the monitoring range is simpler; and real-time images of the scene can be obtained. Therefore the omnidirectional vision system based on ODVS has developed rapidly in recent years and is becoming a key area in computer vision research; since 2000 IEEE has held an annual special workshop on omnidirectional vision (IEEE Workshop on Omni-directional Vision). At present no paper or patent applying an omnibearing vision sensor to the field of intelligent tunnel safety monitoring has been retrieved.
Therefore, adopting the omnibearing vision sensor ODVS, using digital image processing techniques, and combining certain features of vehicles and potential safety hazards in the tunnel to find reasonable characteristic criteria can, with tunnel fire monitoring as the priority, also monitor other traffic accidents and automatically complete traffic information collection, equipping the tunnel with a pair of intelligent eyes.
(3) summary of the invention
In order to overcome the deficiencies that existing tunnel safety monitoring must adopt multiple detection technologies simultaneously, that early tunnel fire detection is difficult and the false alarm rate high, that in large-space, large-area tunnels with relatively harsh environmental conditions existing fire detection equipment cannot play its role or process in real time, that present video monitoring devices do not fulfill their potential, and that intelligent means are lacking at the level of devices and components, the invention provides an intelligent tunnel safety monitoring apparatus based on omnidirectional computer vision that can realize omnibearing real-time security monitoring: it catches in time the early signs of fire in a travelling vehicle so that timely firefighting measures can be taken to confine the damage caused by the disaster to a minimum scope; it finds the various traffic accidents in the tunnel in time; and it analyzes and calculates traffic data such as vehicle flow rate, lane occupancy ratio, vehicle classification, and vehicle speed, so that the occurrence of traffic accidents can be reduced by means such as controlling the vehicle flow rate.
The technical scheme adopted by the present invention to solve its technical problem is:
An intelligent tunnel safety monitoring apparatus based on omnidirectional computer vision comprises a microprocessor, a video sensor for monitoring the tunnel site, and a communication module for communicating with the outside. The microprocessor comprises: an image data reading module for reading the video image information transferred from the video sensor; a file storage module for storing the data collected by the video sensor to a memory; and an on-site real-time playing module for connecting an external display device and playing the on-site monitoring picture in real time. The output of the video sensor is connected to the microprocessor by communication. The video sensor is an omnibearing vision sensor, comprising an outward-convex mirror surface for reflecting objects in the monitored field, a transparent cylinder, and a camera: the convex mirror surface faces downward, the transparent cylinder supports the convex mirror surface, and the camera for shooting the image formed on the convex mirror surface is located inside the transparent cylinder, positioned at the virtual focus of the convex mirror surface. The microprocessor also comprises:
A sensor calibration module for calibrating the parameters of the omnibearing vision sensor and establishing the correspondence between real points in space and the points of the video image obtained;
An image unwrapping processing module for expanding the collected circular video image into a rectangular panoramic image: according to the correspondence between a point (x*, y*) on the circular omnidirectional image and a point (x**, y**) on the rectangular cylindrical panorama, a mapping matrix between (x*, y*) and (x**, y**) is established, as shown in formula (1):

P**(x**, y**) = M × P*(x*, y*)  (1)

where M is the mapping matrix, P*(x*, y*) is the pixel matrix of the circular omnidirectional image, and P**(x**, y**) is the pixel matrix of the rectangular cylindrical panorama;
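The unwrapping of formula (1) can be sketched as a polar-to-Cartesian lookup. A minimal illustration follows; the mirror center, the inner/outer radii, the output size, and the nearest-neighbour sampling (instead of a precomputed mapping matrix) are all assumptions of this sketch, not values fixed by the patent:

```python
import numpy as np

def unwrap_panorama(circ, center, r_in, r_out, out_w=360, out_h=60):
    """Expand a circular omnidirectional image into a rectangular
    cylindrical panorama: each output column is an angle, each output
    row a radius on the circular image."""
    cx, cy = center
    pano = np.zeros((out_h, out_w), dtype=circ.dtype)
    for row in range(out_h):                      # radial direction
        r = r_in + (r_out - r_in) * row / (out_h - 1)
        for col in range(out_w):                  # angular direction
            theta = 2.0 * np.pi * col / out_w
            x = int(round(cx + r * np.cos(theta)))
            y = int(round(cy + r * np.sin(theta)))
            if 0 <= y < circ.shape[0] and 0 <= x < circ.shape[1]:
                pano[row, col] = circ[y, x]
    return pano
```

In a real apparatus the loop would be replaced by a lookup table (the mapping matrix of formula (1)) computed once at calibration time.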
A color model conversion module for converting the color of each pixel of the color image from the RGB color space to the (Cr, Cb) spatial color model;
A moving object detection module for extracting targets from the video stream in real time: each pixel is represented by several adaptive mixed Gaussian models; suppose a total of K Gaussian distributions are used to describe the color distribution of each point, labelled respectively:

η(Y_t, μ_{t,i}, Σ_{t,i}),  i = 1, 2, 3, …, K

Each Gaussian distribution has a different weight ω_{t,i} and a priority p_{t,i} = ω_{t,i}/σ_{t,i}, where σ_{t,i} is the standard deviation of the i-th Gaussian distribution. The Gaussian distributions are sorted by priority from high to low, and a background weight portion threshold M is fixed; only the first several distributions whose cumulative weight satisfies the threshold M are considered background distributions, and the others are foreground distributions;
When detecting foreground points, Y_t is matched one by one against each Gaussian distribution in priority order; if the Gaussian distribution matching Y_t does not represent a background distribution, the point is judged a foreground point, otherwise a background point. If no Gaussian distribution matching Y_t is found during detection, the Gaussian distribution with the lowest priority is removed and a new Gaussian distribution is introduced according to Y_t, given a smaller weight and a bigger variance, and the weights of all Gaussian distributions are normalized again. If the i-th Gaussian distribution matches Y_t, the weight update formula of the Gaussian distributions is as follows:

ω_{t,i} = (1 - α)ω_{t-1,i} + α  (matched distribution);  ω_{t,j} = (1 - α)ω_{t-1,j}  (other distributions)  (17)

where α is a constant weight update rate expressing the background update speed. Formula (17) shows that only the weight of the Gaussian distribution matched with Y_t is raised, while the weights of the other distributions are all lowered. In addition, the parameters of the matched Gaussian distribution are updated according to formulas (18) and (19):
μ_{t,i} = (1 - α)μ_{t-1,i} + α·Y_t  (18)

σ²_{t,i} = (1 - α)σ²_{t-1,i} + α(Y_t - μ_{t,i})²  (19)

where μ_{t,i} is the brightness expectation of the i-th Gaussian and σ²_{t,i} is the brightness variance of the i-th Gaussian;
After the parameters of the Gaussian distributions and the weights of each distribution have been updated, the priority of each distribution must be recomputed and re-sorted, and the number of background distributions determined;
The update rates of background points, static foreground points, and moving foreground points are treated differently: the weight update rate in formula (17) is changed to β, distinguished from the update rate α of the Gaussian distribution parameters, and the changed weight update formula is given by formula (20):

ω_{t,i} = (1 - β)ω_{t-1,i} + β  (matched distribution)  (20)
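The per-pixel mixture update of formulas (17)-(19) can be sketched for a single scalar brightness value. In this sketch K, α, the background threshold, and the 2.5σ match test are illustrative choices, not values fixed by the patent:

```python
import numpy as np

class PixelMoG:
    """Adaptive mixture of K Gaussians for one pixel's brightness."""
    def __init__(self, K=3, alpha=0.05, bg_thresh=0.7):
        self.K, self.alpha, self.T = K, alpha, bg_thresh
        self.w = np.full(K, 1.0 / K)       # weights
        self.mu = np.linspace(0, 255, K)   # brightness means
        self.var = np.full(K, 900.0)       # brightness variances

    def update(self, y):
        """Classify sample y (True = foreground) and update the mixture."""
        prio = self.w / np.sqrt(self.var)            # priority w / sigma
        order = np.argsort(prio)[::-1]               # high -> low priority
        csum = np.cumsum(self.w[order])
        n_bg = int(np.searchsorted(csum, self.T)) + 1  # background count
        matched = -1
        for rank, i in enumerate(order):
            if abs(y - self.mu[i]) <= 2.5 * np.sqrt(self.var[i]):
                matched, is_fg = i, rank >= n_bg
                break
        if matched < 0:
            # no match: replace lowest-priority Gaussian by a new one
            i = order[-1]                  # small weight, big variance
            self.mu[i], self.var[i], self.w[i] = y, 900.0, 0.05
            self.w /= self.w.sum()         # renormalize weights
            return True
        # formula (17): only the matched weight is raised
        m = np.zeros(self.K); m[matched] = 1.0
        self.w = (1 - self.alpha) * self.w + self.alpha * m
        # formulas (18), (19): update the matched mean and variance
        self.mu[matched] = (1 - self.alpha) * self.mu[matched] + self.alpha * y
        self.var[matched] = ((1 - self.alpha) * self.var[matched]
                             + self.alpha * (y - self.mu[matched]) ** 2)
        return is_fg
```

A real detector would hold one such mixture per pixel and use β in place of α for the weight update of static versus moving points, per formula (20).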
A moving object segmentation module for segmenting targets according to spatial continuity using a connected region detection algorithm: the foreground point set F obtained after background subtraction is first dilated and eroded respectively, giving the expansion set Fe and the contraction set Fc; the expansion set Fe and contraction set Fc so obtained can be regarded as the result of filling the small holes of the initial foreground point set F and removing isolated noise points, so the relation Fc ⊂ F ⊂ Fe holds. Then, with the contraction set Fc as starting points, connected regions are detected on the expansion set Fe, the detection results being denoted {Re_i, i = 1, 2, 3, …, n}; finally the detected connected regions are projected back onto the initial foreground point set F, giving the final connected region detection result {R_i = Re_i ∩ F, i = 1, 2, 3, …, n};
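The Fc ⊂ F ⊂ Fe seeding scheme can be sketched with simple 4-neighbour morphology and breadth-first labelling; this is a hypothetical illustration of the scheme, not the patent's exact algorithm:

```python
import numpy as np
from collections import deque

def dilate(m):
    """4-neighbour binary dilation of a boolean mask."""
    p = np.pad(m, 1)
    return (p[:-2, 1:-1] | p[2:, 1:-1] | p[1:-1, :-2]
            | p[1:-1, 2:] | p[1:-1, 1:-1])

def erode(m):
    """4-neighbour binary erosion of a boolean mask."""
    p = np.pad(m, 1)
    return (p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2]
            & p[1:-1, 2:] & p[1:-1, 1:-1])

def clean_and_label(F):
    """Seed connected components from Fc on Fe, then intersect with F
    (Ri = Rei ∩ F): holes are bridged, isolated noise points dropped."""
    Fe, Fc = dilate(F), erode(F)
    labels = np.zeros(F.shape, dtype=int)
    regions = []
    for sy, sx in zip(*np.nonzero(Fc)):   # seeds come from the eroded set
        if labels[sy, sx]:
            continue
        lab = len(regions) + 1
        q = deque([(sy, sx)]); labels[sy, sx] = lab
        pts = []
        while q:                          # BFS over the dilated set Fe
            y, x = q.popleft(); pts.append((y, x))
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < F.shape[0] and 0 <= nx < F.shape[1]
                        and Fe[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = lab; q.append((ny, nx))
        # project back onto the initial foreground set F
        regions.append([(y, x) for (y, x) in pts if F[y, x]])
    return regions
```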
After the target region is segmented in the moving object segmentation module, the static features of the foreground target are extracted, including bounding rectangle size, area, length-width ratio, median point position, and color projection histogram;
A target tracking module for adopting a second-order Kalman filter as the motion model of the target and predicting the position of the moving target: when matching the predicted moving target with the foreground targets, image block matching is used to accurately locate the target position, establishing the correspondence between the static foreground targets and the dynamic moving targets being tracked.
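The predict/match cycle of the target tracking module can be illustrated with a one-dimensional constant-velocity Kalman filter as a stand-in for the patent's second-order motion model; the noise levels q and r are assumed values, and a full tracker would run one such filter per coordinate:

```python
import numpy as np

class Kalman1D:
    """Constant-velocity Kalman filter for one coordinate of a target."""
    def __init__(self, pos, q=1e-3, r=1.0):
        self.x = np.array([pos, 0.0])            # state: position, velocity
        self.P = np.eye(2) * 10.0                # state covariance
        self.F = np.array([[1.0, 1.0], [0.0, 1.0]])  # transition (dt = 1)
        self.H = np.array([[1.0, 0.0]])          # we observe position only
        self.Q = np.eye(2) * q                   # process noise
        self.R = np.array([[r]])                 # measurement noise

    def predict(self):
        """Time update: advance the state one frame, return predicted position."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[0]

    def update(self, z):
        """Measurement update with the matched foreground position z."""
        y = z - self.H @ self.x                  # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P
```

The predicted position would be handed to the block matcher, and the matched foreground position fed back through `update`.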
Further, the microprocessor also comprises a fire judgment module, which comprises:
A flame area variation characteristic judging unit for using the rule that the flame area grows continuously and expansively: the area S_i of each connected region obtained above is used to judge whether the smoke/flame area shows expansive growth. In this patent the smoke/flame area S_i of every frame image enters a recursive calculation, and the recursive value S̄_{i+1} of the smoke/flame area for the next frame image is computed; the computing formula is given by formula (22):

S̄_{i+1} = (1 - K)·S̄_i + K·S_i  (22)

where S̄_{i+1} is the recursive average of the smoke/flame area for the next frame image, S̄_i is the recursive average of the smoke/flame area for the current frame image, S_i is the computed value of the smoke/flame area of the current frame, and K is a coefficient less than 1. In the present invention an expansive growth trend over time is detected with formula (23):

S_i > S̄_i  (23)

If inequality (23) holds, a growth trend exists, reflecting that the flame area shows an expansive growth trend over time, and the quantized value of the flame area expansiveness W_fire_area is set to 1; a quantized value of 1 thus indicates that the flame area is expansive, while 0 indicates that it is not.
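Formula (22) and inequality (23) can be sketched as follows; the value of K and the per-frame reset of the growth flag are illustrative assumptions of this sketch:

```python
def area_trend(areas, K=0.3):
    """Return W_fire_area (1 = expansive growth) for a sequence of
    per-frame smoke/flame areas, using the recursive average of
    formula (22) and the comparison of inequality (23)."""
    s_bar = areas[0]           # recursive average S̄, seeded with frame 0
    growing = 0
    for s in areas[1:]:
        growing = 1 if s > s_bar else 0     # inequality (23)
        s_bar = (1 - K) * s_bar + K * s     # formula (22)
    return growing
```

Because K < 1, the recursive average lags the raw area, so a steadily expanding flame stays above its own average while a static region does not.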
A layering variation characteristic judging unit: a vehicle fire tends to be accompanied by the production of a large amount of black smoke; the position near the vehicle body is usually near the center of the flame, while above the vehicle body is the smoke produced by combustion; therefore the flame core, inner flame, and outer flame of a vehicle fire can be identified using the YCrCb color space.
The conversion formulas from the RGB color space to the YCrCb color space are given in (24):

Y = 0.2990*R + 0.5870*G + 0.1140*B  (24)
Cr = 0.5000*R - 0.4187*G - 0.0813*B + 128
Cb = -0.1687*R - 0.3313*G + 0.5000*B + 128

Then, according to the (Cr, Cb) spatial distribution model of flame images, whether the luminous source at the vehicle body edge falls within the (Cr, Cb) spatial distribution model of flame images is used as an important piece of evidence for judging flame points; the computing formula is given by formula (25):

A(Cr - C̄r)² + B(Cr - C̄r)(Cb - C̄b) + C(Cb - C̄b)² ≤ 1  (25)

where C̄r and C̄b in formula (25) are the sample means of Cr and Cb over flame points, and A, B, C are the coefficients computed from the sample standard deviations and means;
Likewise, with the sample means of Cr and Cb for the constantly rising smoke and the corresponding coefficients A, B, C, formula (25) can also judge whether a point is smoke;
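The conversion of formula (24) translates directly into code; a white pixel maps to the neutral chroma point (Cr, Cb) = (128, 128):

```python
def rgb_to_ycrcb(r, g, b):
    """RGB -> YCrCb conversion per formula (24) of the color model
    conversion module (full-range, JPEG-style coefficients)."""
    y  =  0.2990 * r + 0.5870 * g + 0.1140 * b
    cr =  0.5000 * r - 0.4187 * g - 0.0813 * b + 128
    cb = -0.1687 * r - 0.3313 * g + 0.5000 * b + 128
    return y, cr, cb
```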
Edge variation feature judge module, be used for having very obvious characteristics according to the tunnel incipient fire, because the outline edge comparison rule of the vehicle that does not have an accident is consistency, visual angle from omnibearing vision sensor, the top view in tunnel mainly can detect the length of vehicle and wide, auto model is handled as a simple rectangular model, carry out area relatively by the resulting connected region of aforementioned calculation and one with the rectangle that is just in time containing this connected region, calculate its ratio size with formula (26);
AreaRate_i denotes the area ratio of a tracked target at time T; the larger this value, the closer the detected vehicle is to a rectangle, while a smaller value indicates the detected vehicle departs further from the rectangular model. The rate of change of the area ratio is also introduced to reflect the edge variation of the tracked object: if the area-ratio rate of change computed K consecutive times keeps shrinking, or stays below a threshold k_Area, an edge variation is considered to have occurred; the judgement relation is shown in formula (27);
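The area-ratio computation of formula (26) and the edge-change rule of formula (27) can be sketched as follows; the threshold and run length are assumed values, not those of the patent:

```python
def area_ratio(region_mask):
    """Formula (26): ratio of a connected region's area to the area of
    the tight rectangle that just contains it (1.0 = perfect rectangle).
    region_mask is a list of rows of 0/1 values."""
    cells = [(r, c) for r, row in enumerate(region_mask)
                    for c, v in enumerate(row) if v]
    if not cells:
        return 0.0
    rows = [r for r, _ in cells]
    cols = [c for _, c in cells]
    rect = (max(rows) - min(rows) + 1) * (max(cols) - min(cols) + 1)
    return len(cells) / rect

def edge_change(ratios, k_thresh=0.75, k_runs=3):
    """Formula (27)-style rule (k_thresh and k_runs are assumed): flag
    an edge change when the ratio stays below the threshold, or keeps
    shrinking, over k_runs consecutive frames."""
    if len(ratios) < k_runs:
        return False
    tail = ratios[-k_runs:]
    below = all(r < k_thresh for r in tail)
    shrinking = all(tail[i + 1] < tail[i] for i in range(k_runs - 1))
    return below or shrinking
```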
A body variation characteristic judging unit, used according to the law of occurrence and development of tunnel fires: a vehicle travelling in the tunnel can be simplified to a cuboid, and the body of a burning vehicle changes into a complicated shape only when a fire breaks out; therefore the connected region can be matched against a rectangle, the matching being computed with formula (26). The smaller the matching similarity, the greater the body change and the higher the probability that a fire has broken out; a fire can be confirmed when the body develops across the whole tunnel cross-section, and when the body develops along the longitudinal direction of the tunnel the fire is spreading.
A whole moving characteristic judging unit, used when the layering variation judgement detects flame points in several places and a flame motion track exists: W_fire_move is then set to 1, otherwise 0;
A comprehensive judging unit, used to combine the five flame judgements above so as to reduce the misjudgement rate while also grading the severity of the fire; the weighted comprehensive judgement is computed in module 33 and is given by formula (28), which adopts a weighting scheme:
W_fire_alarm = K_fire_pattern × W_fire_pattern + K_fire_color × W_fire_color + K_fire_move × W_fire_move + K_fire_area × W_fire_area + K_fire_body × W_fire_body (28)
In the formula:
K_fire_pattern is the weight coefficient of the edge variation feature;
K_fire_color is the weight coefficient of the layering variation feature;
K_fire_move is the weight coefficient of the whole moving feature;
K_fire_area is the weight coefficient of the area change feature;
K_fire_body is the weight coefficient of the body variation feature.
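The weighted comprehensive judgement of formula (28) reduces, in code, to a weighted sum of the five feature judgements; the default weight coefficients below are illustrative only, not those chosen in the patent:

```python
def fire_alarm_score(flags, weights=None):
    """Formula (28): W_fire_alarm as a weighted sum of the five
    feature judgements.  `flags` maps each feature name to its 0/1
    (or graded) judgement value W; the default K coefficients are
    assumed placeholders."""
    default = {"pattern": 0.2, "color": 0.3, "move": 0.15,
               "area": 0.2, "body": 0.15}
    k = weights or default
    return sum(k[f] * flags.get(f, 0) for f in k)
```

With all five judgements firing, the score is simply the sum of the weight coefficients, so normalised weights give a score in [0, 1] that can be thresholded to raise the alarm and grade severity.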
Further again, described microprocessor also comprises:
A traffic volume judging module, used to count, in a unit time, the number of vehicles passing a certain place, section or road; the relation is expressed by formula (29):
Q = V*K (29)
where Q is the traffic flow, V is the section mean speed, and K is the vehicle density.
An average speed computing module, used to compute the time mean speed and the interval (space) mean speed: as soon as a vehicle enters a fixed virtual detection line, a new object is created and the lane number RoadwayNo and start time StartTime are assigned to it; the object is then tracked, and when it touches the next virtual detection line the end time EndTime is assigned to it and the interval average speed is computed;
In the formula, X is the total separation, perpendicular to the lane, between the two virtual lines in the picture, and the interval average speed of the object is obtained from X and the elapsed time; once the interval average speeds of several vehicles on a lane are available, the interval average speed of vehicles on that lane is obtained by formula (29);
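A small sketch of the virtual-detection-line speed measurement described above, under the assumption that X is in metres and the StartTime/EndTime stamps in seconds:

```python
def interval_speed_kmh(x_m, start_s, end_s):
    """Interval average speed of one tracked object between two
    virtual detection lines a known distance x_m apart (metres),
    from its StartTime/EndTime stamps (seconds)."""
    dt = end_s - start_s
    if dt <= 0:
        raise ValueError("EndTime must be later than StartTime")
    return x_m / dt * 3.6  # m/s -> km/h

def lane_mean_speed(records):
    """Interval average speed of a lane: mean of the per-vehicle
    interval speeds, records being (x_m, start_s, end_s) tuples."""
    speeds = [interval_speed_kmh(*r) for r in records]
    return sum(speeds) / len(speeds)
```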
A lane occupancy computing module, used to compute the space occupancy and time occupancy of vehicles. The space occupancy Rs is the ratio, at an instant, of the total length occupied by all vehicles on a known detection section to the length of that section; the time occupancy Rt is the ratio, within a unit time, of the cumulative time during which vehicles pass a certain cross-section to that unit time, and can be calculated accordingly.
Further, the described microprocessor also comprises: a congested-fleet detection module, used to judge congestion from the obtained speed information. The traffic state of the tunnel can be divided into several situations: very smooth, smooth, fairly smooth, average, somewhat crowded, crowded and congested. The free-flow speed in the tunnel is 70 km/h; when the average speed obtained by the foregoing computation is 60~70 km/h, the tunnel section can be considered very smooth; at 50~60 km/h, smooth; at 40~50 km/h, fairly smooth; at 30~40 km/h, average; at 20~30 km/h, somewhat crowded; at 10~20 km/h, crowded; below 10 km/h, or even 0, the road section can be considered congested.
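The speed bands above map directly to a small classifier; the labels are English renderings of the seven states, with a 70 km/h free-flow speed as stated:

```python
def congestion_level(avg_speed_kmh):
    """Map the computed average section speed (km/h) to the seven
    tunnel traffic states listed above (free-flow speed 70 km/h)."""
    bands = [(60, "very smooth"), (50, "smooth"), (40, "fairly smooth"),
             (30, "average"), (20, "somewhat crowded"), (10, "crowded")]
    for lower_bound, label in bands:
        if avg_speed_kmh >= lower_bound:
            return label
    return "congested"
```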
The described microprocessor also comprises: a wrong-way and line-crossing judging module, used to confirm, from the lane number RoadwayNo of each detected tracked object, whether wrong-way driving exists. The judging method is a parity check between the lane number of the current lane and the lane number carried by the tracked vehicle: if one is odd and the other even, wrong-way driving is judged. Wrong-way driving is also judged when a tracked object triggers the next virtual detection line without carrying lane number information and its motion track is opposite to the road driving direction; if the lane number information carried by a tracked vehicle becomes inconsistent with the lane it is on, the vehicle is judged to have crossed a line.
The described microprocessor also comprises: a traffic accident, illegal parking and speeding judging module, used to detect events from the computed speed information: if the computed average speed exceeds the tunnel speed limit, it is regarded as speeding; if a tracked target object does not move within a certain period while the congestion detection for its lane in that period reports better than average, the target object is regarded as illegally parked; if several tracked target objects do not move within a certain period while the congestion detection for one or more lanes in that period reports better than average, a traffic accident may have occurred; if the congestion of one or more lanes then deteriorates rapidly over time, with the slope of the change exceeding a certain threshold, the probability that a traffic accident has occurred increases.
The operation principle of the omnibearing vision sensor of the present invention is as follows. The optical part of the ODVS camera head mainly consists of a catadioptric mirror facing vertically downward and a camera facing upward. Concretely, the image unit composed of a collector lens and a CCD is fixed at the bottom of a cylinder of transparent resin or glass; a deep-curvature catadioptric mirror facing downward is fixed at the top of the cylinder; between the catadioptric mirror and the collector lens there is a cone whose diameter gradually decreases, fixed at the middle of the catadioptric mirror. The purpose of the cone is to prevent the light saturation caused by excess light entering the interior of the cylinder. Fig. 2 is a schematic diagram of the optical system of the omnibearing vision sensor of the present invention.
A catadioptric omnidirectional imaging system can be analysed with the pinhole imaging model, but obtaining a perspective panorama requires inverse projection of the captured real-scene image, so the amount of computation is large; in particular, when monitoring vehicles running at high speed, the real-time requirement must be satisfied.
If the horizontal coordinate of an object point in the scene is linear with the coordinate of the corresponding image point, the horizontal scene is guaranteed to be undistorted. The omnibearing vision device for tunnel safety monitoring is installed at the top of the tunnel and monitors the vehicle situation in the horizontal direction, so the catadioptric mirror surface of the omnibearing vision device must be designed to be free of distortion in the horizontal direction.
In the design, a CCD (CMOS) device and an imaging lens are first selected to constitute the camera; the overall system dimensions are pre-estimated on the basis of calibrating the camera's intrinsic parameters, and the mirror surface shape parameters are then determined according to the field of view in the height direction.
As shown in Fig. 1, the projection centre C of the camera is at distance h above the horizontal scene of the road, and the vertex of the mirror is above the projection centre, at distance z0 from it. In the present invention a coordinate system is established with the camera projection centre as origin, and the mirror profile is expressed by the function z(X). The pixel q at distance ρ from the image centre point receives the light ray from horizontal-scene point O (at distance d from the Z axis) reflected at mirror point M onto the image plane. An undistorted horizontal scene requires the horizontal coordinate of a scene object point to be linear with the coordinate of the corresponding image point:
d(ρ)=αρ (1)
In formula (1), ρ is the distance from the centre point of the mirror profile, and α is the magnification of the imaging system.
Let the angle between the normal of the mirror at M and the Z axis be γ, the angle between the incident ray and the Z axis be φ, and the angle between the reflected ray and the Z axis be θ. Then, by the law of reflection:
2γ = φ - θ
The differential equation (7) is obtained from formulas (2), (4), (5) and (6); the differential equation (9) is obtained from formula (7); and formula (10) is obtained from formulas (1) and (5). From formulas (8), (9), (10) and the initial conditions, solving the differential equation yields the numerical solution of the mirror surface shape. The main overall dimensions of the system are the distance Ho of the mirror from the camera and the mirror aperture D. During the design of the catadioptric panoramic system, a suitable camera is selected according to the application requirements, Rmin and the focal length f of the lens are calibrated, the distance Ho of the mirror from the camera is determined, and the mirror aperture Do is calculated by formula (1).
Determination of system parameters: the system parameter αf is determined according to the field of view required in the height direction. Formula (11) is obtained from formulas (1), (2) and (5); some simplification is made here, taking z(x) ≈ z0, the main consideration being that the height variation of the mirror surface is small relative to the position change between mirror and camera. Taking the image centre point as the centre of the largest circle in the image plane, the corresponding field of view is φmax; formula (12) can then be obtained.
The imaging simulation is carried out in the direction opposite to the actual light. If a light source is placed at the camera projection centre and pixel points are selected at equal intervals in the image plane, the rays through these pixels intersect the horizontal plane after reflection by the mirror; if the intersection points are equally spaced, the mirror has the property of a distortion-free horizontal scene. The imaging simulation can, on the one hand, evaluate the imaging character of the mirror and, on the other, calculate the mirror aperture and thickness exactly.
Two key issues involved in the real-time omnidirectional image acquisition of the present invention are further explained: calibration and recognition.
(1) How to calibrate the correspondence between pixel distance in the imaging plane of the omnibearing vision sensor and actual three-dimensional spatial distance. Since the imaging plane of the omnidirectional camera is two-dimensional, measured in pixels, only the pixel distance of a vehicle travelling a certain distance can be known on the imaging plane through calibration, while the actual distance the vehicle travels is unknown; only by finding the correspondence between the two can the actual displacement of the vehicle be calculated from the distance it moves in the image.
(2) The recognition algorithm for traffic vehicles in the omnidirectional camera field of view: when a vehicle passes a virtual detection line, how the system should recognise and register the time at which the vehicle passes.
Calibrating distances in the omnidirectional camera field of view involves the theory of imaging geometry: projecting the three-dimensional scene of the objective world onto the two-dimensional image plane of the camera requires establishing a camera model. Only by determining the physical and orientation parameters of the camera can the metric of the image plane be decided and the actual distance travelled by a vehicle be calculated.
Image transformation involves conversion between different coordinate systems. The imaging system of the camera involves the following four coordinate systems: (1) the real-world coordinate system XYZ; (2) the coordinate system x^y^z^ formulated with the camera as centre; (3) the photo coordinate system x*y*o* formed in the camera; (4) the computer image coordinate system MN used by the digital image inside the computer, with the pixel as unit.
From the transformation relations between the above coordinate systems, the required omnidirectional camera imaging model can be obtained, converting the two-dimensional image back into the correspondence with the three-dimensional scene. The present invention adopts an approximate perspective imaging analysis of the catadioptric omnidirectional imaging system to convert the two-dimensional image formed on the image plane in the camera into the correspondence with the three-dimensional scene. Fig. 3 shows the general perspective imaging model, where d is the object height, ρ the image height, t the object distance, and F the image distance (equivalent focal length); formula (13) can be obtained.
In the design of the above catadioptric omnidirectional imaging system with a distortion-free horizontal scene, the horizontal coordinate of a scene object point is required to be linear with the coordinate of the corresponding image point, as expressed by formula (1). Comparing formulas (13) and (1), it can be seen that the imaging of the horizontal scene by the distortion-free catadioptric omnidirectional system is perspective imaging. Therefore, as far as horizontal-scene imaging is concerned, the catadioptric omnidirectional system with a distortion-free horizontal scene can be regarded as a perspective camera, with α the magnification of the imaging system. Let the projection centre of this virtual perspective camera be point C (see Fig. 3) and its equivalent focal length be F; comparing formulas (13) and (1) yields formula (14).
Formula (15) is obtained from formulas (12) and (14).
The system imaging simulation is carried out according to the above omnidirectional camera imaging model: the family of rays sent from the camera projection centre through equally spaced pixels in the pixel plane, after reflection, intersects the horizontal road surface 5 m from the projection centre at essentially equal spacings, as shown in Fig. 4. Therefore, according to the above design principle, this patent reduces the relation between the coordinates on the horizontal plane of the tunnel and the coordinates of the corresponding omnidirectional image points to a linear relation; that is, through the design of the mirror surface, the conversion from the real-world coordinate system XYZ to the photo coordinate system is a linear correlation with magnification α as the ratio. Next is the conversion from the photo coordinate system to the coordinate system used by the digital image inside the computer: the image coordinate unit used in the computer is the number of discrete pixels in memory, so the coordinates of the actual image plane must be rounded before they can be mapped to the imaging plane of the computer; the conversion expression is given by formula (16).
In the formula, Om and On are respectively the row and column numbers of the pixel onto which the origin of the image plane is mapped on the computer image plane, and Sx and Sy are the scale factors in the x and y directions respectively. Sx and Sy are determined by placing a calibration board at distance Z between the camera and the mirror surface and calibrating the camera; their unit is (pixel). Om and On are determined according to the resolution of the selected camera, also in (pixel).
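Chaining the two conversions above, a minimal sketch of mapping a computer-image pixel back to horizontal ground coordinates, using the pixel-to-image-plane step of formula (16) and the distortion-free relation d(ρ) = αρ of formula (1) (all parameter values in the test are assumed):

```python
def pixel_to_ground(m, n, om, on, sx, sy, alpha):
    """Map computer-image pixel (m, n) to horizontal ground
    coordinates.  om/on: image-centre pixel (row, column);
    sx/sy: scale factors; alpha: imaging-system magnification.
    Returns (ground_x, ground_y, radial_distance)."""
    x_img = (m - om) / sx          # photo-plane coordinates
    y_img = (n - on) / sy
    rho = (x_img ** 2 + y_img ** 2) ** 0.5
    if rho == 0:
        return 0.0, 0.0, 0.0
    d = alpha * rho                # formula (1): radial ground distance
    return d * x_img / rho, d * y_img / rho, d
```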
From the viewpoint of image/video monitoring, moving object detection and tracking lie at the bottom of the whole intelligent tunnel safety monitoring apparatus and are the basis of various subsequent advanced processes such as detecting traffic information, traffic event information and vehicle fires in the tunnel. Moving object detection means extracting targets from the video stream in real time; moving target tracking means continuously tracking a target to determine its motion track. Since the omnibearing vision sensor in the present invention is mounted at the top of the tunnel, foreground points can be detected as in the case of a static camera; the background model adopts the adaptive mixture-of-Gaussians model proposed by Stauffer et al., and the moving target detection and tracking flow is as shown in Fig. 6. In this flow, the background model is first established and updated, the foreground point set is obtained by background subtraction, and target segmentation is then performed; after the target regions are segmented, the static features of the foreground targets can be extracted, including bounding rectangle size, area, aspect ratio, median point position, colour projection histogram, etc. Moving target tracking is then carried out; its purpose is to determine the motion track of each moving target. With the motion tracks of vehicles, the following can be calculated: 1) traffic flow; 2) instantaneous speed, time mean speed, space mean speed; 3) time occupancy, space occupancy; 4) headway; 5) vehicle length classification; 6) fleet length; and further: 1) fleet congestion; 2) wrong-way driving; 3) speeding; 4) line crossing; 5) traffic accidents. The key to obtaining motion tracks is to establish the correspondence between the statically detected foreground targets and the dynamically tracked moving targets; this correspondence can be realised by target feature matching. Commonly used matching features include the position, size, shape and colour of the target; the present invention adopts a second-order Kalman filter as the motion model of the target to predict the target position, and when matching the predicted moving target against foreground targets, image-block matching is used to locate the target position accurately.
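A small sketch of the static-feature extraction and position-based association step described above (the Kalman prediction and image-block refinement are omitted; nearest-median matching stands in for the full feature matching):

```python
def static_features(mask):
    """Static features of one segmented foreground target: bounding
    rectangle size, area, aspect ratio and median point (the colour
    projection histogram is omitted).  mask is rows of 0/1."""
    cells = [(r, c) for r, row in enumerate(mask)
                    for c, v in enumerate(row) if v]
    rows = sorted(r for r, _ in cells)
    cols = sorted(c for _, c in cells)
    h = rows[-1] - rows[0] + 1
    w = cols[-1] - cols[0] + 1
    return {"area": len(cells), "w": w, "h": h, "aspect": w / h,
            "median": (rows[len(rows) // 2], cols[len(cols) // 2])}

def match_target(pred_pos, detections):
    """Associate a predicted target position (e.g. from the Kalman
    motion model) with the nearest detection's median point;
    returns the index of the matched detection."""
    def d2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(range(len(detections)),
               key=lambda i: d2(pred_pos, detections[i]["median"]))
```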
Fires on the road generally follow this pattern: the fire starts on a vehicle, the cause possibly being a short circuit of the vehicle's electrical equipment, prolonged overloaded driving, violent collision or friction with other objects, or vehicle causes such as a rear-end collision. When a tunnel fire occurs, its scale and harm are related to the number of vehicles involved and the size and nature of the loaded goods: in general, the larger the overall dimensions of the vehicles, the more vehicles gathered and the more combustibles carried, the greater the possible harm to the tunnel. This patent therefore introduces an index estimating the tunnel fire hazard: the connected gross area obtained by the foregoing computation measures the extent of injury when a tunnel fire occurs, the extent of injury being proportional to the connected gross area, as expressed by formula (21):
F_area-risk = K_area × TArea (21)
In the formula, TArea is the connected gross area in the omnidirectional view, K_area is the combustible area proportionality coefficient, and F_area-risk is the extent of injury when a fire occurs;
For a long and narrow restricted space such as a tunnel, fire phenomena have their own characteristics. In the early stage of a fire, as combustion proceeds, the temperature of the fire plume rises and, subject to buoyancy, gains considerable buoyant kinetic energy; on reaching the tunnel vault, collision with the ceiling converts part of the buoyant kinetic energy of the high-temperature smoke into horizontal-flow kinetic energy. While most of the smoke flows downstream, a small fraction tends to form recirculating flow, and when the upstream incoming draught of the fire zone is insufficient to overcome this backflow tendency, smoke backflow occurs. In three dimensions, the thermal driving action causes eddy motion of the fire plume in the tunnel cross-section, forced ventilation drives the longitudinal motion of the plume, and the combination of these two motions constitutes the overall flow and distribution of tunnel fire smoke. From the visual angle, a tunnel fire therefore starts from a point (a vehicle), then develops across the whole cross-section, and then spreads longitudinally along the tunnel. Grasping this visual law of tunnel fires helps a computer judge the occurrence of a tunnel fire and distinguish its early, middle and late phases, which is of great significance to tunnel fire control. In the present invention, smoke or flame detected only at a point (a vehicle) is regarded as the early stage of the fire; smoke or flame detected across the whole cross-section is regarded as the middle stage; and spread beyond K metres along the tunnel is regarded as the expansion period of the fire.
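The three-stage visual law above reduces to a simple classifier; the value of K (here k_m) is left as a parameter since the patent does not fix it:

```python
def fire_stage(point_detected, cross_section_covered,
               longitudinal_extent_m, k_m=50.0):
    """Stage classification following the visual law of tunnel fires:
    smoke/flame at a single point (vehicle) -> early stage; across the
    whole cross-section -> middle stage; spread more than K metres
    along the tunnel -> expansion period.  k_m is an assumed default."""
    if longitudinal_extent_m > k_m:
        return "expansion"
    if cross_section_covered:
        return "middle"
    if point_detected:
        return "early"
    return "none"
```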
In the early fire stage, the flame starts from the vehicle and grows out of nothing; there is a process of occurrence and development. The image characteristics of the flame at this stage are very obvious: since an incipient flame is non-stationary, the shape, area, radiation intensity and so on of the flame all change from moment to moment; for a fire on a stopped vehicle the smoke appears above the vehicle, while for a fire on a travelling vehicle the smoke appears above and behind the vehicle. Capturing these characteristics of tunnel fires lays the foundation for their early recognition. As with monitoring the vehicle situation in the tunnel above, the image processing in detection is also continuous dynamic-image processing: for each tracked target on the image, its matching relation with the target in the previous frame is determined according to the target tracking algorithm used above. The image information used in the image-type fire detection method of the present invention is as follows:
1) Area change: an incipient fire is a process of continuous development after ignition on a vehicle. At this stage, the smoke or flame of the burning vehicle spreads vertically from the edge of the vehicle, and its area presents a continuous, expanding increase trend.
2) Edge variation: the edge variation of an incipient tunnel fire flame follows certain rules, with smoke or flame starting to be produced from the vehicle edge. These characteristic quantities and their variation pattern in the early fire stage are used for fire discrimination.
3) Body variation: the body variation of an incipient fire flame reflects the change of the flame in spatial distribution. In the incipient tunnel fire stage, the shape change of the flame, changes of spatial orientation, flame flicker and flame flow all have their own unique variation patterns; the principal feature is development from a point across the cross-section, the smoke then expanding along the longitudinal direction of the tunnel. In image processing, the body variation feature is realised by computing the spatial distribution characteristics of the flame, i.e. the positional relations between pixels.
4) Layering variation: the temperature inside a flame is uneven and shows certain rules. Combustion in a fire is diffusion combustion, and a diffusion flame has an obvious layered nature; like a candle flame, it can be divided into three layers: flame core, inner flame and outer flame. Considered from the visual angle, when a vehicle in the tunnel burns, the inner flame often appears at the vehicle edge, with large amounts of smoke above it.
5) Whole movement: an incipient fire flame develops continuously; as old combustibles burn out and new combustibles are ignited, and especially if the fire is caused by burning liquid combustibles leaking after a rear-end collision or from the vehicle's load, the flame shifts position continuously. When this situation appears, the destruction to the tunnel will be very great.
The beneficial effects of the present invention are mainly: 1. Comprehensive real-time security monitoring can be realised, catching the early signs of a vehicle fire in time so that prompt fire-fighting measures can be taken and the damage caused by the disaster controlled within a minimum range; various traffic accidents are discovered in time, and traffic data such as traffic flow, lane occupancy, vehicle classification and vehicle speed in the tunnel are calculated and analysed, so that the occurrence of traffic accidents can be reduced by means such as controlling traffic flow. 2. The detection range is wide: vehicles within a diameter of 200 m can be tracked omnidirectionally, and the number of detectable lanes is basically unrestricted. 3. Installation and maintenance are non-intrusive: since the video detector is usually installed at the top of the tunnel, installation and maintenance require neither lane closure nor excavation or damage to the road surface. 4. Maintenance is easy and low-cost: a traditional induction coil detector requires pavement excavation when damaged, whereas when a video detection device fails, the equipment can be extracted or repaired directly without closing lanes or excavating the pavement, reducing maintenance cost. 5. The detected parameters are abundant: not only tunnel fires and traffic accidents can be detected, but also basic traffic parameters such as traffic flow, speed, density and occupancy, as well as queuing, wrong-way driving, parking, journey time, delay, fallen objects, incidents and congestion, which a general induction coil detector cannot match. 6. Visibility: the omnidirectional real-time image can be passed to traffic administrators, realising the monitoring function. 7. High detection reliability: all-weather operation, unaffected by severe weather such as rain or snow. 8. High detection accuracy: the precision of most parameter detection is above 90%. 9. Good advancement, extensibility and sustainability: video detection technology is one of the key technologies of intelligent transportation systems; it can stand as a system on its own and can be connected by network with advanced modules such as vehicle information systems, advanced traffic information inducement systems and public traffic information systems to realise more functions.
(4) description of drawings
Fig. 1 is a schematic diagram of the imaging of three-dimensional space reflected onto the omnidirectional vision plane;
Fig. 2 is that the hardware of omnibearing vision sensor is formed schematic diagram;
Fig. 3 is the perspective projection imaging model schematic diagram of omnibearing vision device and general perspective imaging model equivalence;
Fig. 4 is a simulation schematic diagram showing that the image of the omnibearing vision device is undeformed in the horizontal direction;
Fig. 5 shows, on a section perpendicular to the lane direction of the tunnel, the installation position of the omnibearing vision sensor and its visual range in the vertical direction;
Fig. 6 shows, on a section parallel to the lane direction of the tunnel, the installation position of the omnibearing vision sensor and its visual range in the vertical direction;
Fig. 7 is the flow chart of image processing in the omnibearing vision sensor.
Fig. 8 is the flow chart that calculates aspect such as tunnel safety in the omnibearing vision device.
(5) embodiment
Below in conjunction with accompanying drawing the present invention is further described.
With reference to accompanying drawings 1 to 8, an intelligent tunnel safety monitoring apparatus based on omnidirectional computer vision comprises a microprocessor 6, a video sensor 13 for monitoring the tunnel scene, and a communication module for communicating with the outside world; the described microprocessor 6 comprises:
An image data reading module 16, used to read the video image information transmitted from the video sensor; this module can integrate an image preprocessing module to preprocess the video stream images;
A real-time on-site playing module 20, used to connect an external display device and play the on-site monitoring picture in real time;
The output of the described video sensor 13 is communicatively connected with the microprocessor 6 through a USB interface 14.
The described video sensor 13 is an omnibearing vision sensor, comprising an outward-convex catadioptric mirror 1 for reflecting objects in the monitored field, a dark cone 2 for preventing light refraction and light saturation, a transparent cylinder 3, and a camera. The outward-convex catadioptric mirror 1 is located at the top of the transparent cylinder 3, facing downward; the dark cone 2 is fixed at the centre of the convex part of the catadioptric mirror 1; the camera faces upward toward the convex mirror surface and is located at the virtual focus position of the convex mirror surface 1; the camera comprises a CCD unit 5 and a lens 4. The described microprocessor also comprises:
A sensor calibration module 17, used to calibrate the parameters of the omnibearing vision sensor and establish the correspondence between material points in space and the video image obtained;
An image unfolding processing module 19, used to expand the captured circular video image into a panorama: according to the correspondence between a point (x*, y*) on the circular omnidirectional image and a point (x**, y**) on the rectangular cylindrical panorama, a mapping matrix M is established, as shown in formula (1): P** = M × P*, where M is the mapping matrix, P* is the pixel matrix on the circular omnidirectional image, and P** is the pixel matrix on the rectangular cylindrical panorama;
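A sketch of how such a mapping can be built per pixel of the panorama by inverse polar mapping; the inner/outer radii r_min/r_max and the image centre are assumed calibration values:

```python
import math

def unwarp_pixel(i, j, cx, cy, r_min, r_max, width, height):
    """Inverse mapping for panorama unfolding: for pixel (i, j) of a
    width x height rectangular panorama, return the corresponding
    point (x*, y*) on the circular omnidirectional image centred at
    (cx, cy), with usable radii r_min..r_max.  Tabulating this for
    every (i, j) yields the mapping matrix described in the text."""
    theta = 2 * math.pi * i / width          # panorama column -> azimuth
    rho = r_min + (r_max - r_min) * j / height  # panorama row -> radius
    return cx + rho * math.cos(theta), cy + rho * math.sin(theta)
```

Because the mapping depends only on geometry, it is computed once and reused for every frame.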
The output of the image unfolding module is connected to a network transmission module 22, which transmits the video data to the network; the data is also displayed on the display 21 through the real-time playback module 20;
A virtual trigger-line action detection module 23 is used to detect whether a virtual trigger line has been activated, i.e., whether a new vehicle is present;
A color model conversion module 25 is used to convert the color of each pixel of the color image from the RGB color space to the (Cr, Cb) color model;
For each thread 24 that has been started, a judgment can be made by the vehicle judgment module 29 based on the gray-level histogram, or by the vehicle judgment module based on the YUV model;
If a new thread is needed, a new thread 26 is started to make judgments such as vehicle type and vehicle speed, and new memory is opened in the storage module 27 to record the colors of all pixels of the field and register the current position of the vehicle edge;
A moving object detection module 30, used to extract targets from the video stream in real time; each pixel is represented by multiple adaptive mixed Gaussian models. Suppose K Gaussian distributions in total are used to describe the color distribution of each point, labeled respectively as:

η(Y_t, μ_t,i, Σ_t,i), i = 1, 2, 3, …, K

Each Gaussian distribution has a different weight ω_t,i and priority p_t,i = ω_t,i / σ_t,i, where σ_t,i is the variance of each Gaussian distribution;
The Gaussian distributions are sorted by priority from high to low, and a background weight portion and threshold M are chosen; only the first b distributions satisfying

Σ_{i=1}^{b} ω_t,i > M

are considered background distributions, and the others are foreground distributions;
When detecting foreground points, Y_t is matched against each Gaussian distribution one by one in priority order; if a Gaussian distribution that does not represent a background distribution matches Y_t, the point is judged to be a foreground point; otherwise it is a background point;
If no Gaussian distribution matches Y_t during detection, the Gaussian distribution with the lowest priority is removed and a new Gaussian distribution is introduced according to Y_t, given a small weight and a large variance, and the weights of all Gaussian distributions are normalized again; if the i-th Gaussian distribution matches Y_t, the weight update formula for the i-th Gaussian distribution is as follows:

ω_t,i = (1 - α)ω_{t-1,i} + α·M_t,i    (17)

where α is a constant weight update rate expressing the background update speed, and M_t,i is 1 for the matched distribution and 0 otherwise. Formula (17) shows that only the weight of the Gaussian distribution matching Y_t is increased, while the weights of the other distributions are all decreased; in addition, the parameters of the matching Gaussian distribution are updated according to formulas (18) and (19);
μ_t,i = (1 - α)μ_{t-1,i} + α·Y_t    (18)
σ²_t,i = (1 - α)σ²_{t-1,i} + α(Y_t - μ_t,i)²    (19)

In the formulas, μ_t,i is the brightness expectation of the i-th Gaussian and σ²_t,i is the brightness variance of the i-th Gaussian;
After updating the parameters of the Gaussian distributions and the weight of each distribution, the priority of each distribution must also be recomputed and re-sorted, and the number of background distributions determined;
The update rates of background points, static foreground points, and moving foreground points are treated differently: the weight update rate in formula (17) is changed to β, to distinguish it from the update rate α of the Gaussian distribution parameters, and the weight update formula after the change is given by formula (20):
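The per-pixel matching and update steps described above (priority matching, replacement of the lowest-priority distribution on a miss, the updates of formulas (17)-(19), and renormalization) can be sketched as follows for a scalar brightness value; the matching threshold, initial weight, and initial variance are illustrative assumptions, not the patent's values.

```python
def update_mog(gaussians, y, alpha=0.02, match_k=2.5):
    """One update step of a per-pixel mixture-of-Gaussians model.
    gaussians: list of dicts with keys 'w' (weight), 'mu', 'var'.
    Returns True if y matched an existing distribution."""
    matched = None
    # match in priority (w / sigma) order, as in the text
    for g in sorted(gaussians, key=lambda g: g['w'] / g['var'] ** 0.5, reverse=True):
        if abs(y - g['mu']) <= match_k * g['var'] ** 0.5:
            matched = g
            break
    if matched is None:
        # replace the lowest-priority distribution with a new one around y,
        # given a small weight and a large variance (assumed values)
        worst = min(gaussians, key=lambda g: g['w'] / g['var'] ** 0.5)
        worst.update(mu=y, var=900.0, w=0.05)
    for g in gaussians:
        # formula (17): only the matched distribution's weight is raised
        g['w'] = (1 - alpha) * g['w'] + (alpha if g is matched else 0.0)
    if matched is not None:
        matched['mu'] = (1 - alpha) * matched['mu'] + alpha * y                    # (18)
        matched['var'] = (1 - alpha) * matched['var'] + alpha * (y - matched['mu']) ** 2  # (19)
    total = sum(g['w'] for g in gaussians)
    for g in gaussians:
        g['w'] /= total   # renormalize weights
    return matched is not None
```

After this step the caller would re-sort by priority and re-count the background distributions, as the text describes.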
A moving object segmentation module 31, used to segment targets according to spatial continuity using a connected-region detection algorithm. The foreground point set F obtained after background subtraction is first dilated and eroded, respectively, yielding a dilated set Fe and a contracted set Fc; the dilated set Fe and contracted set Fc thus obtained can be regarded as the result of filling small holes in, and removing isolated noise points from, the initial foreground point set F. The relation Fc ⊂ F ⊂ Fe therefore holds. Then, taking the contracted set Fc as seed points, connected regions are detected on the dilated set Fe, and the detection result is denoted {Rei, i = 1, 2, 3, …, n}; finally the detected connected regions are projected back onto the initial foreground point set F, giving the final connected-region detection result {Ri = Rei ∩ F, i = 1, 2, 3, …, n};
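A minimal sketch of this Fc/Fe segmentation on a binary point set, using 8-neighborhood dilation and erosion and region growing seeded on Fc; this is a pure-Python illustration of the scheme, not the patent's implementation.

```python
def neighbors(p):
    """8-neighborhood of a point, including the point itself."""
    r, c = p
    return {(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)}

def dilate(pts):
    return set().union(*(neighbors(p) for p in pts)) if pts else set()

def erode(pts):
    return {p for p in pts if neighbors(p) <= pts}

def segment(F):
    """Detect connected regions seeded on the eroded set Fc, grown over
    the dilated set Fe, then projected back onto F (Ri = Rei ∩ F)."""
    Fe, Fc = dilate(F), erode(F)
    regions, seen = [], set()
    for seed in sorted(Fc):
        if seed in seen:
            continue
        region, stack = set(), [seed]
        while stack:
            p = stack.pop()
            if p in seen or p not in Fe:
                continue
            seen.add(p)
            region.add(p)
            stack.extend(neighbors(p) - seen)
        regions.append(region & F)
    return regions
```

Because isolated noise points vanish under erosion, they never seed a region; small holes inside a target are bridged by the dilated set, so the target stays in one piece.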
After the target regions are segmented in the moving object segmentation module, the static features of the foreground targets are extracted, including bounding rectangle size, area, aspect ratio, median point position, and color projection histogram;
After vehicle safety monitoring or fire hazard monitoring finishes, the end thread 33 is started to end the whole monitoring procedure.
The intelligent tunnel safety monitoring apparatus based on omnidirectional computer vision of the present invention mainly consists of three subsystems: an image-type fire alarm subsystem, an image-type traffic data measurement subsystem, and an image-type traffic accident detection subsystem. Image acquisition and low-level image processing are identical in the three subsystems, as shown in Figure 7: after the foreground targets and moving targets are extracted, different algorithms are applied to realize tunnel fire detection, traffic data detection, and traffic accident detection, respectively.
Image acquisition obtains the dynamic video image within a certain circumferential range of the tunnel through the omnidirectional vision sensor installed at the center of the tunnel ceiling; this circumferential range is related to the catadioptric mirror design of the omnidirectional vision sensor. In the design, a CCD (or CMOS) device and an imaging lens are first selected to form the camera; on the basis of calibrating the camera's intrinsic parameters, the overall system dimensions are estimated, and the mirror surface shape parameters are then determined according to the field of view in the height direction.
Low-level video image processing performs the image preprocessing necessary for moving target tracking and subsequent high-level processing once the omnidirectional visual image has been obtained from the omnidirectional vision sensor. As shown in Figure 7, foreground images can be segmented from the omnidirectional visual image by image preprocessing: in this flow, the background model is first established and updated, the foreground point set is obtained by background subtraction, and target segmentation is then performed; after the target regions are segmented, the static features of the foreground targets can be extracted, including bounding rectangle size, area, aspect ratio, median point position, color projection histogram, etc. The establishment and updating of the background model in the present invention uses the adaptive mixed Gaussian model.
The moving object detection module extracts targets from the video stream in real time; each pixel is represented by multiple adaptive mixed Gaussian models. Suppose K Gaussian distributions in total are used to describe the color distribution of each point, labeled respectively as:

η(Y_t, μ_t,i, Σ_t,i), i = 1, 2, 3, …, K

Each Gaussian distribution has a different weight ω_t,i and priority p_t,i = ω_t,i / σ_t,i, where σ_t,i is the variance of each Gaussian distribution;
The Gaussian distributions are sorted by priority from high to low, and a suitable background weight portion and threshold are chosen; only the first b distributions satisfying Σ_{i=1}^{b} ω_t,i > M, where M is a pre-set threshold, are considered background distributions, and the others are foreground distributions. When detecting foreground points, Y_t is matched against each Gaussian distribution one by one in priority order; if a Gaussian distribution that does not represent a background distribution matches Y_t, the point is judged to be a foreground point, otherwise it is a background point. The update of the multi-Gaussian background model is relatively complicated, because it must update not only the parameters of the Gaussian distributions themselves but also the weight and priority of each distribution. If no Gaussian distribution matches Y_t during detection, the Gaussian distribution with the lowest priority is removed and a new Gaussian distribution is introduced according to Y_t, given a small weight and a large variance; all Gaussian distributions are then weight-normalized again. If the i-th Gaussian distribution matches Y_t, the weight update formula for the i-th Gaussian distribution is as follows:

ω_t,i = (1 - α)ω_{t-1,i} + α·M_t,i    (17)

where α is a constant weight update rate expressing the background update speed, and M_t,i is 1 for the matched distribution and 0 otherwise. Formula (17) shows that only the weight of the Gaussian distribution matching Y_t is increased, while the weights of the other distributions are all decreased. The parameters of the matching Gaussian distribution are also updated according to formulas (18) and (19); after updating the parameters of the Gaussian distributions and the weight of each distribution, the priority of each distribution must be recomputed and re-sorted, and the number of background distributions determined.
μ_t,i = (1 - α)μ_{t-1,i} + α·Y_t    (18)
σ²_t,i = (1 - α)σ²_{t-1,i} + α(Y_t - μ_t,i)²    (19)

In the formulas, μ_t,i is the brightness expectation of the i-th Gaussian and σ²_t,i is the brightness variance of the i-th Gaussian.
The update strategy of the background model is the most critical technique in the realization of the background model in the present invention. Two principles must be observed in model updating: 1) the background model must respond quickly enough to background changes — such a change may be a change of the intrinsic background color caused by factors such as illumination variation, or a change of the background region itself — and the background model must catch up with the change of the real background rapidly; 2) the background model must have strong resistance to interference from moving targets: in the update process, each point of the background model is trained by a color sequence, and this training is aimed at the static background rather than at moving targets. To satisfy these two principles, the present invention combines background model updating with the tracking results of the later stages, giving background points and static foreground points (stationary targets) a larger update rate and moving foreground points (moving targets) a smaller update rate; in effect, the motion tracking results guide the update, so that the background model responds rapidly to background changes while being protected from the influence of moving targets.
To satisfy the above two principles, the present invention adopts the multi-Gaussian background model, because in a background model with multiple Gaussian distributions the foreground/background decision does not depend merely on a particular Gaussian distribution but rather on the weights and priorities of all the distributions; in addition, a weight and priority update strategy for each distribution is adopted, in which only the Gaussian parameters of the matching distribution are updated. With this approach the interference from moving objects is not so serious, but shortcomings remain: static targets receive no special treatment, and the response speed to background change still easily leaves holes behind slowly moving targets. For this reason the present invention treats the update rates of background points, static foreground points, and moving foreground points differently: the weight update rate in formula (17) is changed to β, to distinguish it from the update rate α of the Gaussian distribution parameters, and the weight update formula after the change is given by formula (20).
Moving object segmentation segments targets according to spatial continuity using a connected-region detection algorithm. However, connected-region segmentation is strongly affected by noise in the raw data, so denoising is generally performed first; denoising can be realized by morphological operations. In the present invention, erosion and dilation operators are used to remove isolated noise foreground points and to fill small holes in the target region, respectively. The specific procedure is: the foreground point set F obtained after background subtraction is first dilated and eroded, respectively, yielding a dilated set Fe and a contracted set Fc; the dilated set Fe and contracted set Fc thus obtained can be regarded as the result of filling small holes in, and removing isolated noise points from, the initial foreground point set F. The relation Fc ⊂ F ⊂ Fe therefore holds. Then, taking the contracted set Fc as seed points, connected regions are detected on the dilated set Fe, and the detection result is denoted {Rei, i = 1, 2, 3, …, n}; finally the detected connected regions are projected back onto the initial foreground point set F, giving the final connected-region detection result {Ri = Rei ∩ F, i = 1, 2, 3, …, n}. This target segmentation algorithm preserves the integrity of the target while avoiding the influence of noise foreground points, and also preserves the edge details of the target. After the target regions are segmented, the static features of the foreground targets can be extracted, including bounding rectangle size, area, aspect ratio, median point position, color projection histogram, etc.; these features are the necessary information for the subsequent identification of vehicles and of whether a vehicle has caught fire.
Detecting traffic information and traffic events in the tunnel requires tracking the foreground targets obtained above in order to determine the trajectory of each moving target; the key point here is to establish the correspondence between the static foreground targets detected and the dynamic moving targets (vehicles) being tracked. This correspondence can be established by matching target features; commonly used matching features include target position, size, shape, and color. In the present invention a second-order Kalman filter is adopted as the motion model of the target to predict the target's position; when matching the predicted moving target with the foreground targets, image-block matching is used to locate the target position accurately. Vehicles running in the tunnel can occlude each other from some viewing angles, so it is necessary to divide the correspondence between moving targets and foreground targets into several classes to be handled separately. Because the omnidirectional vision sensor is installed at the top of the tunnel, complicated occlusion does not occur from this viewing angle, so the present invention divides the correspondence into the following 5 classes: 1) a moving target appears (from 0 to 1): a new moving target appears; this moving target is initialized, the initial target weight is set, and the merge/split flag is set to 0; 2) a moving target disappears (from 1 to 0): a moving target disappears; the target is updated with its prediction, its weight is reduced, and the merge/split flag is set to 0; 3) ideal tracking of a moving target (from 1 to 1): normal tracking without occlusion; the moving target is updated with the corresponding foreground target, it is judged to be in a moving or stationary state according to its speed, and the merge/split flag is set to 0; 4) moving targets merge (from n to 1): multiple targets occlude one another; the merge/split flag of each moving target is increased by 1; if the target speeds are close and the merge/split flag is greater than K2, they are merged into one large moving target, a new moving target is initialized from this foreground target, and the dynamic features of the original moving targets are inherited; otherwise each target is still updated with its own corrected prediction; 5) a moving target splits (from 1 to n): the opposite of merging; the merge/split flag of each moving target is decreased by 1; if the merge/split flag is less than K1, the moving target is divided into multiple small moving targets, multiple moving targets are initialized from these foreground targets, and the dynamic features of the original target are inherited; otherwise these foreground targets are merged into one large foreground target, which is used to update the moving target.
The target weight mentioned above mainly expresses the reliability of a moving target. Litter in the tunnel, such as small scraps of paper and plastic bags, may also become tracked moving targets, while the present invention mainly detects vehicles in the tunnel; since the bounding rectangle size of a vehicle follows certain rules, the reliability of a tracked moving target can be expressed quantitatively. From high to low, moving targets are divided into: 1) visual targets — reliable moving targets whose bounding rectangle size, area, and aspect ratio basically conform to the range of a vehicle type; these participate in target matching; 2) moving targets — unreliable moving targets; although these targets move, their bounding rectangle size, area, and aspect ratio do not meet the conditions of even the smallest vehicle, yet they still participate in target matching; 3) static inertial targets — targets whose bounding rectangle size, area, and aspect ratio basically conform to the range of a vehicle type, and which have changed from an original visual target into a static inertial target with no other moving target in front; such a static inertial target (vehicle) can be considered to have broken down. Another situation is that a static inertial target appears suddenly, which can be considered an article dropped from the visual target (vehicle) in front. From the tunnel safety angle, the higher the reliability of the moving targets, the higher the safety of the tunnel; the appearance of static inertial targets or unreliable moving targets on the tunnel road surface increases the tunnel safety risk. Only a moving target that appears in several consecutive frames is considered reliable enough to participate in target matching; similarly, a target must disappear in several consecutive frames before it is considered to have definitely disappeared.
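The three reliability levels described above can be sketched as a simple rule on the extracted static features; the aspect-ratio range, minimum area, and stop-speed threshold below are illustrative assumptions, not values taken from the patent.

```python
def classify_target(aspect, area, speed,
                    ar_range=(1.0, 6.0), min_area=2.0, v_stop=0.5):
    """Classify a tracked blob into the three reliability levels from the
    text: 'visual' (vehicle-like and moving), 'static-inertial'
    (vehicle-like but stopped: possible breakdown or dropped article),
    and 'unreliable' otherwise. Thresholds are assumed, units arbitrary."""
    vehicle_like = ar_range[0] <= aspect <= ar_range[1] and area >= min_area
    if not vehicle_like:
        return 'unreliable'
    return 'visual' if speed > v_stop else 'static-inertial'
```

A static-inertial result on the road surface is precisely the case the text flags as a safety risk (breakdown or dropped cargo).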
The flame area variation feature judgment utilizes the rule that the flame area increases continuously and expansively: the area Si of each connected region obtained above is used to judge whether the smoke/flame area shows an expansive increase. In this patent the smoke/flame area Si of every frame image is computed recursively, and the recursive value of the smoke/flame area of the next frame image is obtained; the computing formula is given by formula (22):

S̄_{t+1} = K·S̄_t + (1 - K)·S_t    (22)

In the formula, S̄_{t+1} is the recursive average value of the smoke/flame area of the next frame image, S̄_t is the recursive average value of the smoke/flame area of the current frame image, S_t is the calculated value of the current-frame smoke/flame area, and K is a coefficient less than 1. In the present invention the following relation is used to detect an expansive increase trend over time:

S̄_{t+1} > S̄_t    (23)

If the above inequality (23) holds, there is an increase trend, reflecting that the flame area is showing an expansive increase over time, and the flame area expansion quantized value W_fire-area is set to 1; thus a quantized value of 1 shows that the flame area is expansive, and 0 shows that it is not.
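A minimal sketch of the recursive-average trend test, under the assumption that the recursion is the exponential smoothing S̄_{t+1} = K·S̄_t + (1−K)·S_t with K < 1 (the original formula image is not reproduced in this text, so the exact form is inferred from the symbol descriptions); K = 0.9 is an assumed value.

```python
def area_trend(areas, k=0.9):
    """Quantized flame-area expansion value W_fire_area: 1 if the
    recursively smoothed area keeps increasing over the sequence
    (inequality (23) holds at every step), else 0."""
    s_bar = areas[0]
    rising = True
    for s in areas[1:]:
        s_next = k * s_bar + (1 - k) * s   # recursion of formula (22)
        if s_next <= s_bar:                # inequality (23) fails
            rising = False
        s_bar = s_next
    return 1 if rising else 0
```

The smoothing makes the trend test robust to single-frame flicker in the measured flame area.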
The layered variation feature judgment: a vehicle fire is often accompanied by a large amount of black smoke; the position near the vehicle body is usually near the center of the flame, while above the vehicle body is the smoke produced by combustion. Therefore the YCrCb color space can be used to identify the flame core, inner flame, and outer flame of a vehicle fire.
The conversion formulas from the RGB color space to the YCrCb color space are given by formula (24):

Y = 0.2990*R + 0.5870*G + 0.1140*B
Cr = 0.5000*R - 0.4187*G - 0.0813*B + 128    (24)
Cb = -0.1687*R - 0.3313*G + 0.5000*B + 128
Then, according to the (Cr, Cb) spatial distribution model of flame images, whether the light-emitting source at the vehicle body edge falls within the (Cr, Cb) spatial distribution model of flame images is used as an important basis for judging a flame point; the computing formula is given by formula (25). In formula (25), the quantities involved are the sample standard averages of Cr and Cb of flame points, and A, B, C are coefficients computed from the sample standard deviations and mean values.
Similarly, with the sample standard averages of Cr and Cb of continuously rising smoke and the corresponding coefficients A, B, C, formula (25) can also be used to judge whether a point is smoke.
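The RGB to YCrCb conversion of formula (24) can be written directly; note that the coefficients of each row sum to 1 for Y and to 0 for Cr and Cb, so a gray pixel maps to Cr = Cb = 128.

```python
def rgb_to_ycrcb(r, g, b):
    """Full-range RGB -> YCrCb conversion of formula (24)."""
    y  =  0.2990 * r + 0.5870 * g + 0.1140 * b
    cr =  0.5000 * r - 0.4187 * g - 0.0813 * b + 128
    cb = -0.1687 * r - 0.3313 * g + 0.5000 * b + 128
    return y, cr, cb
```

Flame pixels are strongly red and therefore push Cr well above 128, which is what the (Cr, Cb) distribution model of formula (25) exploits.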
The edge variation feature judgment is based on a very obvious characteristic of an incipient tunnel fire: the outline edge of a vehicle that has not had an accident is regular and consistent. From the viewing angle of the omnidirectional vision sensor (the present invention adopts a catadioptric omnidirectional design with no distortion of the horizontal scene), the top view of the tunnel mainly detects the length and width of a vehicle, so the aspect ratio of the vehicle outline edge is a very important characteristic parameter; since the length and width sides of the vehicle outline edge are all nearly straight lines and the shape is close to a rectangle, the vehicle model is treated here as a simple rectangular model. The connected region obtained by the above calculation can therefore be compared in area with the rectangle that just contains the connected region, and the ratio is calculated with formula (26):

Area_i^rate = S_region / S_rect    (26)

Area_i^rate expresses the area ratio of a tracked target at time T; the larger this value, the closer the detected vehicle is to a rectangle, and conversely, the smaller it is, the more the detected vehicle departs from the rectangular model. The rate of change of the area ratio is also introduced here to reflect the edge variation of the tracked object: if the area ratio change rate calculated K consecutive times shows a decreasing trend or is less than a threshold k_Area, edge variation is considered to have occurred; the judgment relation is shown in (27);
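The area ratio of formula (26) can be computed from a connected region and its bounding rectangle; a minimal sketch on a set of (row, col) points:

```python
def area_rate(region):
    """Formula (26): region area over the area of its bounding rectangle.
    A value near 1 means the blob is close to a rectangle (intact vehicle);
    lower values indicate departure from the rectangular model."""
    rows = [r for r, _ in region]
    cols = [c for _, c in region]
    h = max(rows) - min(rows) + 1
    w = max(cols) - min(cols) + 1
    return len(region) / (h * w)
```

A perfect rectangular blob scores exactly 1.0; a flame-distorted outline with ragged edges scores lower, which is the signal used in relation (27).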
The body variation feature judgment is mainly based on the rules of occurrence and development of a tunnel fire: a vehicle traveling in the tunnel can be simplified to a cuboid, and the body of a burning vehicle changes only when a fire occurs, varying into a complicated shape. Therefore a rectangle can be match-compared with the connected-region area, with the matching calculated by formula (26); the smaller the matching similarity, the greater the body change and the higher the probability of fire. When the body develops into the whole tunnel cross-section, a fire can be confirmed; when the body develops along the longitudinal direction of the tunnel, the fire is spreading.
The overall movement feature judgment can further confirm fire spread: it applies when the layered variation feature judgment detects flame points in many places and a flame motion track exists; the flame motion track is obtained with the same tracking algorithm used in the vehicle monitoring described above.
On the basis of the five flame judgments above, a comprehensive judgment is then made to reduce the misjudgment rate, and the degree of the fire can be judged at the same time. The weighted comprehensive judgment is calculated in module 33; the comprehensive judgment formula is given by formula (28), which adopts a weighting scheme:

W_fire-alarm = K_fire-pattern × W_fire-pattern + K_fire-color × W_fire-color + K_fire-move × W_fire-move + K_fire-area × W_fire-area + K_fire-body × W_fire-body    (28)

In the formula:
K_fire-pattern is the weight coefficient of the edge variation feature;
K_fire-color is the weight coefficient of the layered variation feature;
K_fire-move is the weight coefficient of the overall movement feature;
K_fire-area is the weight coefficient of the area variation feature;
K_fire-body is the weight coefficient of the body variation feature.
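A sketch of the weighted comprehensive judgment of formula (28); the weight values in the usage example are illustrative assumptions, since the patent does not fix them here.

```python
def fire_alarm_score(w, k):
    """Formula (28): weighted sum of the five quantized flame-feature
    values. w: quantized values per feature; k: weight coefficients."""
    features = ('pattern', 'color', 'move', 'area', 'body')
    return sum(k[f] * w[f] for f in features)
```

With quantized feature values in {0, 1} and weights summing to 1, the score lies in [0, 1] and can double as a rough fire-degree estimate, as the text suggests.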
The traffic volume judgment refers to judging the number of vehicles passing a certain place, section, or road within unit time; together with average speed and vehicle density it forms the three elements of traffic flow, and their relation is expressed by formula (29):

Q = V × K    (29)

where Q is the traffic flow, V is the section average speed, and K is the vehicle density.
Average speed can be calculated in two ways: time mean speed and interval mean speed. Time mean speed is the arithmetic mean of the spot speeds of all vehicles passing a certain place on the road within a specific time interval. Interval mean speed is the average speed of all vehicles occupying a certain road interval within a specific time interval.
The interval average speed of vehicles on a certain lane can be measured in the manner described above. First, two virtual trigger lines separated by an interval total length X are drawn perpendicular to the lane within the detection range of the video; each vehicle passing a virtual trigger line of its lane triggers an event. When a vehicle first enters a virtual trigger line, object tracking is started and the object is given a start-time stamp; when the vehicle enters the next virtual trigger line, object tracking ends, the object is given an end-time stamp, and vehicle type judgment is carried out to obtain the vehicle type information. The tracked vehicle finally carries 4 kinds of information: lane number, start time, end time, and vehicle type. Lane numbers are even for the right-hand carriageway and odd for the left-hand carriageway, numbered automatically in increments from left to right. With the ambiguity of human-eye judgment, vehicles can be roughly divided into small vehicles, such as cars and minibuses; medium vehicles, such as mini-buses (19 to 40 seats) and trucks (2.5 to 7.5 tons); large vehicles, such as large buses and trucks (7.5 to 15 tons); and extra-large vehicles, such as trucks (15 to 40 tons) and tractor-trailers. Such a classification basically satisfies the needs of traffic administration; in the present invention vehicles are divided into the following 5 classes and 2 types, as shown in the table below.
The area of the rectangle containing the connected region, obtained with formula (26), can be used to distinguish extra-large, large, medium, and small vehicles and motorcycles by area size. Statistics are used to obtain the single-factor membership function curve of area for each concentrated vehicle type, and the first-level classifier then identifies vehicles by size; its relation is expressed by formula (30):

FirstClass = {{extra-large vehicle} (A), {large vehicle} (B), {medium vehicle} (C), {small vehicle} (D), {motorcycle} (E)}    (30)
The first-level classifier above distinguishes extra-large, large, medium, and small vehicles and motorcycles, but it cannot judge whether a large vehicle is a large bus or a truck; the same decision problem exists for medium and small vehicles. The second-level classifier solves exactly this problem. The biggest feature difference between buses and trucks is that the roof lines of a bus are comparatively regular and the aspect ratio of the vehicle outline lies within a certain range, whereas a truck generally has no roof and its appearance has uneven edges. Therefore statistics are used to obtain the single-factor membership function curve of the aspect ratio for each concentrated vehicle type, assisted by edge straightness detection, to identify buses and trucks; its relation is expressed by formula (31):

SecondClass = {{bus} (K), {truck} (H)}    (31)
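The two-level classification can be sketched as follows; the area bins, bus aspect-ratio range, and straightness threshold are illustrative assumptions standing in for the membership-function curves the patent derives statistically.

```python
def classify_vehicle(area, aspect, edge_straightness,
                     area_bins=(1.0, 4.0, 9.0, 16.0),
                     bus_aspect=(2.0, 3.5), straight_min=0.8):
    """Two-level classifier sketch. First level: bin by blob area into
    motorcycle/small/medium/large/extra-large (formula (30)). Second
    level: split 'large' into bus vs. truck using outline aspect ratio
    and roof-edge straightness (formula (31)). Thresholds are assumed."""
    labels = ('motorcycle', 'small', 'medium', 'large', 'extra-large')
    first = labels[sum(area >= b for b in area_bins)]
    if first != 'large':
        return first, None
    is_bus = bus_aspect[0] <= aspect <= bus_aspect[1] \
        and edge_straightness >= straight_min
    return first, ('bus' if is_bus else 'truck')
```

The same second-level pattern would apply to the medium and small classes, which the text notes face the same ambiguity.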
Some characteristics of omnidirectional vision can be exploited: beyond a certain range (called the far-view-angle range in the present invention) the height and width features of a moving object can be captured, while within a certain range (called the near-view-angle range in the present invention) the width and length features of a moving object can be captured. Therefore the ratio of the height to the width of a vehicle can be obtained in the far-view-angle situation, and the ratio of the length to the width in the near-view-angle situation; once the width value is known, the length and height values can be obtained.
The tracked object is expressed here in the following manner: Object(RoadwayNo, StartTime, EndTime, CarType). When a vehicle enters a set virtual detection line, a new object is created, and the lane number RoadwayNo and the start time StartTime are assigned to this object; the object is then tracked; when this object touches the next virtual detection line, the end time EndTime is assigned to it, the vehicle class is classified, and the resulting vehicle type CarType is assigned to the object. The interval average speed can then be computed; the computing formula is given by formula (28):

V_i = X / (EndTime - StartTime)    (28)

In the formula, X is the total length separating the two virtual lines drawn perpendicular to the lane in the picture, and V_i is the interval average speed of this object. Once the interval average speeds of some vehicles on some lanes are available, the interval average speed of vehicles on those lanes can be obtained by formula (29);
The interval average speed of vehicles on the left and right carriageways can also be obtained by formula (30);
Formula (30) can also be obtained from the result of formula (29): the interval average speeds of the vehicles on all odd-numbered lanes are summed and divided by the number of odd-numbered lanes to obtain the interval average speed on the left carriageway, and likewise the interval average speeds of the vehicles on all even-numbered lanes are summed and divided by the number of even-numbered lanes to obtain the interval average speed on the right carriageway.
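The per-object interval speed and the per-carriageway averages described above can be sketched as follows; the unit conversion (meters and seconds to km/h) is an added convenience, not part of the patent's formulas.

```python
def interval_speed(x_m, start_s, end_s):
    """Interval average speed of one tracked object:
    V = X / (EndTime - StartTime), returned in km/h for X in meters
    and times in seconds."""
    return x_m / (end_s - start_s) * 3.6

def carriageway_speeds(objects):
    """Average per-object speeds over odd (left) and even (right) lanes.
    objects: list of (RoadwayNo, speed_kmh) tuples."""
    left = [v for lane, v in objects if lane % 2 == 1]
    right = [v for lane, v in objects if lane % 2 == 0]
    avg = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return avg(left), avg(right)
```

A vehicle covering X = 100 m between trigger lines in 6 s thus has an interval speed of 60 km/h.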
The lane occupancy ratio is divided into space occupancy and time occupancy. Space occupancy is the ratio, measured at an instant, of the total length occupied by all vehicles on a known detection road section to the length of the road section, denoted Rs;
Time occupancy is the ratio of the cumulative time during which vehicles pass a certain section in unit time to the unit time, denoted Rt, and can be calculated by formula (28).
Maintaining a certain continuing gap between running vehicles is very important: a suitable gap serves as the basis of tunnel traffic control and is a safety measure against rear-end collisions. Traffic density is therefore extremely important, and the lane occupancy ratio can be used to describe density: the higher the lane occupancy ratio, the greater the traffic density.
Congestion detection can be based on the obtained speed information: the traffic state of the tunnel is divided into very smooth, smooth, fairly smooth, average, fairly crowded, crowded and blocked. The free-flow speed in the tunnel is 70 km/h. When the average speed obtained by the preceding computation is 60~70 km/h, the tunnel section can be considered very smooth; at 50~60 km/h, smooth; at 40~50 km/h, fairly smooth; at 30~40 km/h, average; at 20~30 km/h, fairly crowded; at 10~20 km/h, crowded; below 10 km/h, or even 0, the road can be considered blocked.
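The speed-to-state mapping above can be sketched as follows; the English state labels are translations chosen here, not patent terminology.

```java
public class CongestionLevel {
    /** Maps the computed average speed (km/h) to the seven traffic states listed above. */
    public static String classify(double avgSpeedKmh) {
        if (avgSpeedKmh >= 60) return "very smooth";
        if (avgSpeedKmh >= 50) return "smooth";
        if (avgSpeedKmh >= 40) return "fairly smooth";
        if (avgSpeedKmh >= 30) return "average";
        if (avgSpeedKmh >= 20) return "fairly crowded";
        if (avgSpeedKmh >= 10) return "crowded";
        return "blocked";
    }

    public static void main(String[] args) {
        System.out.println(classify(65.0)); // very smooth
        System.out.println(classify(5.0));  // blocked
    }
}
```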
Wrong-way driving and lane-crossing are judged from the lane number RoadwayNo of each detected tracked object. Wrong-way driving has two cases. In the first, the vehicle changes lane within the visual monitoring range, for example moving from the lane it should travel on into a lane of the opposite direction; whether wrong-way driving exists can be confirmed by checking the lane number RoadwayNo of the tracked vehicle. The judging method is a parity check between the current lane number and the lane number RoadwayNo carried by the tracked vehicle: if one is odd and the other even, the vehicle is judged to be driving the wrong way. In the second case, the vehicle is already driving in reverse when it enters the visual monitoring range; this is judged mainly from an object that carries no lane number RoadwayNo, does not trigger the next virtual detection line, and whose motion trajectory is opposite to the driving direction of the road. Lane-crossing is likewise judged from an inconsistency between the lane number RoadwayNo carried by the tracked vehicle and the lane number it currently occupies.
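The parity check described above might be sketched as follows; the odd/even direction convention follows the text, while the method names are illustrative.

```java
public class WrongWayCheck {
    /** Parity check on lane numbers: odd lanes run one way, even lanes the other,
     *  per the odd/even carriageway split described in the text. */
    public static boolean isWrongWay(int entryLane, int currentLane) {
        return (entryLane % 2) != (currentLane % 2);
    }

    /** Any inconsistency between the carried and the observed lane number is a lane-crossing. */
    public static boolean isLaneCrossing(int entryLane, int currentLane) {
        return entryLane != currentLane;
    }

    public static void main(String[] args) {
        System.out.println(isWrongWay(1, 2));      // odd -> even lane: wrong way
        System.out.println(isLaneCrossing(1, 3));  // lane change within one direction
    }
}
```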
Traffic accidents, illegal parking and speeding are judged from the speed information computed by formula (28): if the computed average speed exceeds the tunnel speed limit, the vehicle is regarded as speeding. If a tracked target object does not move for a certain period of time while the congestion detection for its lane over that period reports a state of average or better, the tracked target is judged to be illegally parked. If several tracked target objects do not move for a certain period of time while the congestion detection for one or more lanes over that period reports a state of average or better, a traffic accident is considered possible; if the congestion on those lanes then deteriorates rapidly over time, a slope of congestion change exceeding a certain threshold indicates an increased probability that a traffic accident has occurred.
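A sketch of the three judgements just described; the boolean inputs and the congestion-slope threshold are illustrative stand-ins for the detector outputs, not values from the patent.

```java
public class IncidentJudge {
    /** Speeding: the interval average speed exceeds the tunnel speed limit. */
    public static boolean isSpeeding(double avgSpeedKmh, double limitKmh) {
        return avgSpeedKmh > limitKmh;
    }

    /** Illegal parking: one tracked object is motionless while its lane is not congested. */
    public static boolean isIllegalParking(boolean objectStill, boolean laneCongested) {
        return objectStill && !laneCongested;
    }

    /** Possible accident: several still objects plus congestion deteriorating
     *  faster than slopeThreshold (an illustrative parameter). */
    public static boolean isPossibleAccident(int stillObjects, double congestionSlope,
                                             double slopeThreshold) {
        return stillObjects > 1 && congestionSlope > slopeThreshold;
    }

    public static void main(String[] args) {
        System.out.println(isSpeeding(85.0, 70.0));          // true
        System.out.println(isIllegalParking(true, false));   // true
        System.out.println(isPossibleAccident(3, 0.8, 0.5)); // true
    }
}
```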
The microprocessor 15 is an embedded system, and the algorithms of the present invention are implemented in the Java language.
Claims (6)
1. An intelligent tunnel safety monitoring apparatus based on omnidirectional computer vision, said intelligent tunnel safety monitoring apparatus comprising a microprocessor, a video sensor used to monitor the tunnel scene, and a communication module used to communicate with the outside world, said microprocessor comprising:
an image data reading module, used to read the video image information transmitted from the video sensor;
a file storage module, used to store the video collected by the video sensor into memory;
a real-time on-site playing module, used to connect an external display device and play the on-site monitoring picture in real time;
the output of said video sensor being connected to the microprocessor by a communication link, characterized in that:
said video sensor is an omnidirectional vision sensor, said vision sensor comprising a convex catadioptric mirror for reflecting objects in the monitored field, a dark cone for preventing light refraction and light saturation, a transparent cylinder, and a camera; said convex catadioptric mirror is located at the top of the transparent cylinder and faces downwards, the dark cone is fixed at the center of the convex portion of the catadioptric mirror, the camera faces upwards towards the convex catadioptric mirror, and said camera is located at the virtual focus of the convex catadioptric mirror;
said microprocessor further comprising:
a sensor calibration module, used to calibrate the parameters of the omnidirectional vision sensor and to establish the correspondence between real points in space and the video image obtained;
an image unwrapping processing module, used to expand the collected circular video image into a panoramic image; according to the correspondence between a point (x*, y*) on the circular omnidirectional image and a point (x**, y**) on the rectangular cylindrical panorama, a mapping matrix between (x*, y*) and (x**, y**) is established, as shown in formula (1):
P**(x**, y**) ← M × P*(x*, y*)   (1)
In the above formula, M is the mapping matrix, P*(x*, y*) is the pixel matrix on the circular omnidirectional image, and P**(x**, y**) is the pixel matrix on the rectangular cylindrical panorama;
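The mapping matrix M of formula (1) can be sketched as a precomputed lookup table from panorama pixels to source pixels; the image center, inner/outer radii and the linear radius interpolation below are assumed calibration values and a simplification of the mirror geometry, not details from the patent.

```java
public class PanoramaUnwrap {
    /** Builds a lookup table in the spirit of formula (1): for each panorama pixel
     *  (x**, y**) it stores the source pixel (x*, y*) on the circular image.
     *  cx, cy, rMin, rMax are assumed calibration values. */
    public static int[][][] buildMap(int panoW, int panoH, double cx, double cy,
                                     double rMin, double rMax) {
        int[][][] map = new int[panoH][panoW][2];
        for (int y = 0; y < panoH; y++) {
            double r = rMin + (rMax - rMin) * y / (panoH - 1); // radius for this row
            for (int x = 0; x < panoW; x++) {
                double theta = 2.0 * Math.PI * x / panoW;      // azimuth for this column
                map[y][x][0] = (int) Math.round(cx + r * Math.cos(theta));
                map[y][x][1] = (int) Math.round(cy + r * Math.sin(theta));
            }
        }
        return map;
    }

    public static void main(String[] args) {
        int[][][] m = buildMap(360, 100, 320.0, 240.0, 50.0, 200.0);
        // Column 0 of the top row maps to the point rMin to the right of the center.
        System.out.println(m[0][0][0] + "," + m[0][0][1]); // 370,240
    }
}
```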
a color model conversion module, used to convert the color of each pixel of the color image from the RGB color space to the YCrCb color space;
a motion object detection module, used to extract targets from the video stream in real time; each pixel is represented by several adaptive mixed Gaussian models. Let the color distribution of each pixel be described by K Gaussian distributions in total, labeled respectively:
η(Y_t, μ_{t,i}, σ²_{t,i}), i = 1, 2, 3, …, K
Each Gaussian distribution has its own weight ω_{t,i}, with Σ_{i=1}^{K} ω_{t,i} = 1, and a priority p_{t,i} = ω_{t,i} / σ_{t,i}, where σ_{t,i} is the standard deviation of the i-th Gaussian distribution; Y_t denotes the Y value of a pixel at time t, i.e. the Y component in YCrCb.
The Gaussian distributions are sorted by priority from high to low, and a background weight fraction threshold M is set; only the first several distributions whose accumulated weight satisfies the threshold are considered background distributions, the others being foreground distributions;
When detecting foreground points, Y_t is matched against each Gaussian distribution in priority order; if a Gaussian distribution that does not represent the background matches Y_t, the point is judged to be a foreground point, otherwise a background point.
If no Gaussian distribution matches Y_t during detection, the Gaussian distribution with the lowest priority is removed and a new Gaussian distribution is introduced based on Y_t, given a small weight and a large variance, after which the weights of all Gaussian distributions are renormalized. If the i-th Gaussian distribution matches Y_t, the weights of the Gaussian distributions are updated by formula (17):
ω_{t,i} = (1 − α) ω_{t−1,i} + α,  ω_{t,j} = (1 − α) ω_{t−1,j} for j ≠ i   (17)
where α is a constant weight update rate expressing the background update speed; formula (17) shows that only the weight of the Gaussian distribution matching Y_t is increased while the weights of the other distributions are all decreased. In addition, the parameters of the matching Gaussian distribution are updated according to formulae (18) and (19):
μ_{t,i} = (1 − α) μ_{t−1,i} + α Y_t   (18)
σ²_{t,i} = (1 − α) σ²_{t−1,i} + α (Y_t − μ_{t,i})²   (19)
In the formulae, μ_{t,i} is the brightness mean of the i-th Gaussian and σ²_{t,i} is the brightness variance of the i-th Gaussian;
After the parameters of the Gaussian distributions and the weight of each distribution have been updated, the priority of each distribution is recomputed, the distributions are re-sorted, and the number of background distributions is re-determined;
The update rates of background points, static foreground points and moving foreground points are treated differently: the weight update rate in formula (17) is changed to β, distinguished from the update rate α of the Gaussian distribution parameters, and the modified weight update formula is given by formula (20);
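Under the assumptions that the per-pixel model operates on the scalar Y component and that "matching" means falling within 2.5 standard deviations of a distribution's mean (a common convention, not stated in the text), the update cycle of formulae (17)-(19) might be sketched as follows; the initial values and the replacement weight are illustrative.

```java
public class PixelMixture {
    // One adaptive Gaussian mixture for a single pixel (K distributions).
    double[] w, mu, var;          // weight, mean, variance per distribution
    final double alpha;           // update rate used in formulae (17)-(19)
    final double matchSigmas = 2.5;

    PixelMixture(int k, double alpha) {
        w = new double[k]; mu = new double[k]; var = new double[k];
        this.alpha = alpha;
        for (int i = 0; i < k; i++) { w[i] = 1.0 / k; mu[i] = 128.0 * i / k; var[i] = 900.0; }
    }

    /** Updates the mixture with the Y value of the current frame; returns the
     *  index of the matched (or replaced) distribution. */
    int update(double y) {
        int m = -1;
        for (int i = 0; i < w.length; i++)
            if (Math.abs(y - mu[i]) < matchSigmas * Math.sqrt(var[i])) { m = i; break; }
        if (m < 0) {               // no match: replace the lowest-priority distribution
            m = 0;
            for (int i = 1; i < w.length; i++)
                if (w[i] / Math.sqrt(var[i]) < w[m] / Math.sqrt(var[m])) m = i;
            mu[m] = y; var[m] = 900.0; w[m] = 0.05;   // small weight, large variance
        } else {
            for (int i = 0; i < w.length; i++)        // formula (17)
                w[i] = (1 - alpha) * w[i] + (i == m ? alpha : 0.0);
            mu[m] = (1 - alpha) * mu[m] + alpha * y;                          // (18)
            var[m] = (1 - alpha) * var[m] + alpha * (y - mu[m]) * (y - mu[m]); // (19)
        }
        double s = 0; for (double wi : w) s += wi;    // renormalize the weights
        for (int i = 0; i < w.length; i++) w[i] /= s;
        return m;
    }

    public static void main(String[] args) {
        PixelMixture pm = new PixelMixture(3, 0.05);
        int m = 0;
        for (int t = 0; t < 200; t++) m = pm.update(100.0); // stable background at Y = 100
        System.out.println(pm.mu[m]); // converges towards 100
    }
}
```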
a moving object segmentation module, used to segment targets by spatial continuity with a connected-region detection algorithm; the foreground point set F obtained after background subtraction is first dilated and eroded respectively, giving the expanded set Fe and the shrunken set Fc; using the resulting Fe and Fc amounts to filling small holes in, and removing isolated noise points from, the initial foreground point set F, so the relation Fc ⊂ F ⊂ Fe holds; then, taking the shrunken set Fc as seed points, connected regions are detected on the expanded set Fe, the detection result being denoted {Rei, i = 1, 2, 3, …, n}; finally the detected connected regions are projected back onto the initial foreground point set F, giving the final connected-region detection result {Ri = Rei ∩ F, i = 1, 2, 3, …, n};
after the target regions have been segmented in said moving object segmentation module, the static features of the foreground targets are extracted, including bounding rectangle size, area, aspect ratio, median point position and color projection histogram;
a target tracking module, used to adopt a second-order Kalman filter as the motion model of the target and to predict the position of the moving target; when matching the predicted moving target against the foreground targets, image block matching is used to locate the target position accurately, establishing the correspondence between the static foreground targets and the tracked dynamic moving targets.
2. The intelligent tunnel safety monitoring apparatus based on omnidirectional computer vision as claimed in claim 1, characterized in that said microprocessor further comprises a fire judging module, said fire judging module comprising:
a flame area variation judging unit, used to exploit the continuous, expanding growth of the flame area: the area Si of each connected region obtained above is used to judge whether the smoke/flame area is expanding. The smoke/flame area Si of each frame is fed into a recursive computation giving the recursive value S_t(i+1) of the smoke/flame area for the next frame, the computing formula being given by formula (22):
S_t(i+1) = (1 − k) * S_t(i) + k * Si   (22)
In the formula, S_t(i+1) is the recursive average of the smoke/flame area for the next frame, S_t(i) is the recursive average for the current frame, Si is the measured smoke/flame area of the current frame, and k is a coefficient less than 1; the formula tracks the expanding growth trend over time.
If inequality (23) holds, there is a growth trend, reflecting that the flame area is expanding over time, and the flame-area quantized value W_fire_area is set to 1; thus a quantized value of 1 means the flame area is expanding and 0 means it is not;
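A sketch of the recursion of formula (22) together with an expansion check in the spirit of inequality (23), whose exact form is not reproduced in the text; the "measurement keeps exceeding its recursive average" reading and the quantizer logic are illustrative assumptions.

```java
public class FlameAreaTrend {
    /** Formula (22): recursive (exponentially weighted) average of the smoke/flame area. */
    public static double recursiveArea(double prevAvg, double currentArea, double k) {
        return (1 - k) * prevAvg + k * currentArea;
    }

    /** Illustrative expansion check: the area shows an expanding trend if every
     *  new measurement exceeds the running recursive average. Returns W_fire_area. */
    public static int expansionQuantizer(double[] areas, double k) {
        double avg = areas[0];
        int rises = 0;
        for (int t = 1; t < areas.length; t++) {
            if (areas[t] > avg) rises++;
            avg = recursiveArea(avg, areas[t], k);
        }
        return rises == areas.length - 1 ? 1 : 0;   // 1 = expanding, 0 = not
    }

    public static void main(String[] args) {
        System.out.println(expansionQuantizer(new double[]{10, 14, 20, 29, 41}, 0.3)); // 1
        System.out.println(expansionQuantizer(new double[]{10, 10, 10, 10}, 0.3));     // 0
    }
}
```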
a layering variation judging unit, used to exploit the fact that a vehicle fire tends to be accompanied by large amounts of black smoke: the region near the car body is usually near the center of the flame, while above the car body lies the smoke produced by combustion; the YCrCb color space is therefore used to distinguish the flame core, inner flame and outer flame of a vehicle fire. The conversion from the RGB color space to the YCrCb color space is given by formula (24):
Y  =  0.2990*R + 0.5870*G + 0.1140*B            (24)
Cr =  0.5000*R − 0.4187*G − 0.0813*B + 128
Cb = −0.1687*R − 0.3313*G + 0.5000*B + 128
Then, according to the distribution model of flame images in the YCrCb color space, whether the luminous source at the car body edge falls within that distribution model is used as an important piece of evidence for judging a flame point; the computation is given by formula (25), in which Cr and Cb are the sample means of Cr and Cb over flame points, and A, B, C are coefficients computed from the sample standard deviations and means:
Cr = 144.6; Cb = 117.5; A = 3.7×10⁻³; B = 4.1×10⁻³; C = 4.5×10⁻³
The sample means of Cr and Cb of rising smoke are obtained in the same way, and formula (25) is likewise used to judge whether a point is smoke;
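A sketch of the color conversion of formula (24) together with a flame-color test; since the body of formula (25) is not reproduced in the text, the elliptical decision region below is an assumption built from the quoted means and coefficients, not the patent's actual formula.

```java
public class FlameColor {
    /** Formula (24): RGB -> YCrCb. Returns {Y, Cr, Cb}. */
    public static double[] rgbToYCrCb(double r, double g, double b) {
        double y  =  0.2990 * r + 0.5870 * g + 0.1140 * b;
        double cr =  0.5000 * r - 0.4187 * g - 0.0813 * b + 128;
        double cb = -0.1687 * r - 0.3313 * g + 0.5000 * b + 128;
        return new double[]{y, cr, cb};
    }

    // Sample means and coefficients quoted for formula (25).
    static final double CR0 = 144.6, CB0 = 117.5;
    static final double A = 3.7e-3, B = 4.1e-3, C = 4.5e-3;

    /** Flame-color test. ASSUMED elliptical form of formula (25):
     *  A(Cr-CR0)^2 + B(Cb-CB0)^2 + C(Cr-CR0)(Cb-CB0) <= 1. */
    public static boolean isFlameColored(double cr, double cb) {
        double dr = cr - CR0, db = cb - CB0;
        return A * dr * dr + B * db * db + C * dr * db <= 1.0;
    }

    public static void main(String[] args) {
        double[] white = rgbToYCrCb(255, 255, 255);
        System.out.println(white[0] + " " + white[1] + " " + white[2]); // ~255 128 128
        System.out.println(isFlameColored(CR0, CB0)); // true at the sample mean
    }
}
```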
an edge variation judging unit, used to exploit a very distinctive feature of an incipient tunnel fire: the outline edge of a vehicle that has not had an accident is regular and consistent. From the viewpoint of the omnidirectional vision sensor, the top view in the tunnel mainly captures the length and width of a vehicle, so the vehicle model is treated as a simple rectangular model; the area of the connected region obtained by the preceding computation is compared with that of the rectangle containing it, and their ratio is computed with formula (26).
Area_rate_i denotes the area ratio of a tracked target at time T: the larger the value, the closer the detected vehicle is to a rectangle; the smaller the value, the further the detected vehicle departs from the rectangular model. The rate of change of the area ratio is also introduced to reflect the edge variation of the tracked object: if the computed area-ratio change rate decreases k consecutive times, or falls below a threshold k_Area, an edge variation is deemed to have occurred; the judging relation is shown in (27);
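A sketch of the area-ratio computation of formula (26) and the trend test of relation (27); the consecutive-drop counting below is an illustrative reading of the rule, and the sample values are invented.

```java
public class EdgeVariation {
    /** Formula (26): ratio of the connected-region area to its bounding-rectangle area. */
    public static double areaRate(int regionPixels, int rectW, int rectH) {
        return (double) regionPixels / (rectW * rectH);
    }

    /** Relation (27), illustratively: edge variation if the rate drops k
     *  consecutive times or falls below the threshold kArea. */
    public static boolean hasEdgeVariation(double[] rates, int k, double kArea) {
        int drops = 0;
        for (int t = 1; t < rates.length; t++) {
            drops = rates[t] < rates[t - 1] ? drops + 1 : 0;
            if (drops >= k || rates[t] < kArea) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(areaRate(800, 40, 25)); // 0.8: close to rectangular
        System.out.println(hasEdgeVariation(new double[]{0.9, 0.8, 0.7, 0.6}, 3, 0.3)); // true
    }
}
```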
a body variation judging unit, used to exploit the law of occurrence and development of tunnel fires: a vehicle travelling in the tunnel can be simplified to a cuboid, and only when a fire breaks out does the body of the burning vehicle change, varying into a complicated shape; therefore the connected surface area is matched against a rectangle, the matching being computed with formula (26); the smaller the matching similarity, the greater the body change and the higher the probability that a fire has broken out; when the body develops to fill the whole tunnel cross-section a fire has certainly broken out, and when the body extends along the longitudinal direction of the tunnel it shows the fire is spreading;
a whole-motion judging unit, used to set W_fire_move to 1 when the layering variation judgement detects flame points in several places and a flame motion trajectory exists, and to 0 otherwise;
a comprehensive judging unit, used to combine the five flame judgements above into an overall judgement so as to reduce the misjudgement rate, while also judging the degree of the fire; the weighted comprehensive judgement is computed in module 33, the comprehensive judging formula being given by formula (28), which adopts a weighting scheme:
W_fire_alarm = K_fire_pattern × W_fire_pattern + K_fire_color × W_fire_color + K_fire_move × W_fire_move + K_fire_area × W_fire_area + K_fire_body × W_fire_body   (28)
In the formula:
K_fire_pattern is the weight coefficient of the edge variation feature;
K_fire_color is the weight coefficient of the layering variation feature;
K_fire_move is the weight coefficient of the whole-motion feature;
K_fire_area is the weight coefficient of the area variation;
K_fire_body is the weight coefficient of the body variation.
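The weighted combination of formula (28) might be sketched as follows; the weight values are illustrative, not taken from the patent.

```java
public class FireAlarm {
    /** Formula (28): weighted sum of the five quantized flame judgements. */
    public static double alarmScore(double[] k, double[] w) {
        double s = 0.0;
        for (int i = 0; i < k.length; i++) s += k[i] * w[i];
        return s;
    }

    public static void main(String[] args) {
        // Order: pattern, color, move, area, body. Weights are illustrative.
        double[] weights = {0.2, 0.3, 0.15, 0.2, 0.15};
        double[] votes   = {1, 1, 0, 1, 1};   // quantized judgements W_fire_*
        System.out.println(alarmScore(weights, votes)); // ~0.85
    }
}
```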
3. The intelligent tunnel safety monitoring apparatus based on omnidirectional computer vision as claimed in claim 1 or 2, characterized in that said microprocessor further comprises:
a traffic volume judging module, used to count the number of vehicles passing a given place, section or lane of the road per unit time, the relation being expressed by formula (29):
Q = V * K   (29)
where Q is the traffic flow, V is the section mean speed, and K is the vehicle density;
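Formula (29), Q = V * K, as a one-line sketch; the units assumed here are km/h for V and vehicles/km for K, giving Q in vehicles/h.

```java
public class TrafficVolume {
    /** Formula (29): Q = V * K (flow = section mean speed x vehicle density). */
    public static double flow(double meanSpeedKmh, double densityVehPerKm) {
        return meanSpeedKmh * densityVehPerKm;
    }

    public static void main(String[] args) {
        System.out.println(flow(60.0, 25.0)); // 1500.0 vehicles per hour
    }
}
```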
an average speed computing module, used to compute the time average speed and the interval average speed: when a vehicle enters across the fixed virtual detection line, a new object is created and the lane number RoadwayNo and the start time StartTime are assigned to it; the object is then tracked, and when it touches the next virtual detection line, the end time EndTime is assigned to it, after which the interval average speed is computed by formula (28):
V_i = X / (EndTime − StartTime)   (28)
In the formula, X is the distance between the two virtual lines drawn perpendicular to the lane, and V_i is the interval average speed of this object;
once the interval average speeds of the individual vehicles on a lane are available, the interval average speed of vehicles on that lane is obtained by formula (29);
a lane occupancy computing module, used to compute the space occupancy and time occupancy of vehicles: the space occupancy is the ratio, measured at one instant, of the total length occupied by all vehicles on a known detection section to the length of that section, denoted Rs; the time occupancy is the ratio of the cumulative time during which vehicles pass a given cross-section within a unit interval to that unit interval, denoted Rt, and can be calculated via formula (28).
4. The intelligent tunnel safety monitoring apparatus based on omnidirectional computer vision as claimed in claim 3, characterized in that said microprocessor further comprises:
a congestion detection module, used to make the judgement from the obtained speed information: the traffic state of the tunnel is divided into very smooth, smooth, fairly smooth, average, fairly crowded, crowded and blocked; the free-flow speed in the tunnel is 70 km/h; when the average speed obtained by the preceding computation is 60~70 km/h, the tunnel section is considered very smooth; at 50~60 km/h, smooth; at 40~50 km/h, fairly smooth; at 30~40 km/h, average; at 20~30 km/h, fairly crowded; at 10~20 km/h, crowded; below 10 km/h, or even 0, the road is considered blocked.
5. The intelligent tunnel safety monitoring apparatus based on omnidirectional computer vision as claimed in claim 3, characterized in that said microprocessor further comprises:
a wrong-way and lane-crossing judging module, used to make the judgement from the lane number RoadwayNo of each detected tracked object: whether wrong-way driving exists is confirmed by checking the lane number RoadwayNo of the tracked vehicle, the judging method being a parity check between the current lane number and the lane number RoadwayNo carried by the tracked vehicle, the vehicle being judged to drive the wrong way if one is odd and the other even; a vehicle already driving in reverse is judged from an object that carries no lane number RoadwayNo, does not trigger the next virtual detection line, and whose motion trajectory is opposite to the driving direction of the road; if an inconsistency appears between the lane number RoadwayNo carried by the tracked vehicle and the lane number it occupies, the vehicle is judged to be crossing lanes.
6. The intelligent tunnel safety monitoring apparatus based on omnidirectional computer vision as claimed in claim 3, characterized in that said microprocessor further comprises:
a traffic accident, illegal parking and speeding judging module, used to make the detection from the speed information computed by formula (28): if the computed average speed exceeds the tunnel speed limit, the vehicle is regarded as speeding; if a tracked target object does not move for a certain period of time while the congestion detection for its lane over that period reports a state of average or better, the tracked target is judged to be illegally parked; if several tracked target objects do not move for a certain period of time while the congestion detection for one or more lanes over that period reports a state of average or better, a traffic accident is considered possible, and if the congestion on those lanes then deteriorates rapidly over time, a slope of congestion change exceeding a certain threshold indicates an increased probability that a traffic accident has occurred.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNB2006100516330A CN100459704C (en) | 2006-05-25 | 2006-05-25 | Intelligent tunnel safety monitoring apparatus based on omnibearing computer vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN1852428A CN1852428A (en) | 2006-10-25 |
CN100459704C true CN100459704C (en) | 2009-02-04 |
Family
ID=37133894
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB2006100516330A Expired - Fee Related CN100459704C (en) | 2006-05-25 | 2006-05-25 | Intelligent tunnel safety monitoring apparatus based on omnibearing computer vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN100459704C (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1404695A (en) * | 2000-12-06 | 2003-03-19 | 皇家菲利浦电子有限公司 | Method and apparatus to select the best video frame to transmit to a remote station for closed circuit television (CCTV)based residential area security monitoring |
JP2004007089A (en) * | 2002-05-30 | 2004-01-08 | Nippon Advantage Corp | Condition change determining device |
US20050162515A1 (en) * | 2000-10-24 | 2005-07-28 | Objectvideo, Inc. | Video surveillance system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN100459704C (en) | Intelligent tunnel safety monitoring apparatus based on omnibearing computer vision | |
CN100437660C (en) | Device for monitoring vehicle breaking regulation based on all-position visual sensor | |
CN108922188B (en) | Radar tracking and positioning four-dimensional live-action traffic road condition perception early warning monitoring management system | |
CN100419813C (en) | Omnibearing visual sensor based road monitoring apparatus | |
US9704060B2 (en) | Method for detecting traffic violation | |
CN1858551B (en) | Engineering car anti-theft alarm system based on omnibearing computer vision | |
CN102945603B (en) | Method for detecting traffic event and electronic police device | |
CN103824452B (en) | A kind of peccancy parking detector based on panoramic vision of lightweight | |
CN104508723B (en) | Image processing apparatus | |
CN100417223C (en) | Intelligent safety protector based on omnibearing vision sensor | |
KR101095528B1 (en) | An outomatic sensing system for traffic accident and method thereof | |
CN103400111B (en) | Method for detecting fire accident on expressway or in tunnel based on video detection technology | |
CN102724482A (en) | Intelligent visual sensor network moving target relay tracking system based on GPS (global positioning system) and GIS (geographic information system) | |
CN113936465B (en) | Traffic event detection method and device | |
CN105163014A (en) | Road monitoring device and method | |
CN105021528A (en) | Road weather detection device based on videos | |
CN106327880A (en) | Vehicle speed identification method and system based on monitored video | |
CN102589515B (en) | Foggy-weather distance measurement method and device thereof as well as distance pre-warning method and device thereof | |
CN209044870U (en) | A kind of traffic control system of real time data transmitting | |
Wei et al. | Adaptive video-based vehicle classification technique for monitoring traffic |
CN113128847A (en) | Entrance ramp real-time risk early warning system and method based on laser radar | |
Ooi et al. | A method for distinction of bicycle traffic violations by detection of cyclists' behavior using multi-sensors | |
CN219202510U (en) | Bend early warning prompting pile | |
KR102340902B1 (en) | Apparatus and method for monitoring school zone | |
Jokela et al. | Optical road monitoring of the future smart roads–preliminary results |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
C17 | Cessation of patent right | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 2009-02-04; Termination date: 2011-05-25 |