CN1858551A - Engineering car anti-theft alarm system based on omnibearing computer vision - Google Patents

Engineering car anti-theft alarm system based on omnibearing computer vision

Info

Publication number
CN1858551A
CN1858551A (application CNA2006100516839A / CN200610051683A; granted as CN1858551B)
Authority
CN
China
Prior art keywords
formula
image
module
pixel
frame
Prior art date
Legal status
Granted
Application number
CNA2006100516839A
Other languages
Chinese (zh)
Other versions
CN1858551B (en)
Inventor
汤一平
金海明
陈征
尤思思
Current Assignee
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT
Priority to CN200610051683A (patent CN1858551B)
Publication of CN1858551A
Application granted
Publication of CN1858551B
Status: Expired - Fee Related


Landscapes

  • Closed-Circuit Television Systems (AREA)

Abstract

This invention relates to an engineering vehicle anti-theft alarm system based on omnidirectional computer vision, comprising a microprocessor, an omnidirectional vision sensor for monitoring the anti-theft situation of the vehicle, and a communication module for communicating with the outside world; the output of the sensor is connected to the microprocessor. Digital image processing and network communication technology are combined with the sensor to judge whether anyone has entered the monitored area, to derive reasonable characteristic criteria for the period in which a theft may occur, to notify monitoring personnel of the event, and to record the current video images for later analysis and case solving.

Description

Engineering car anti-theft alarm system based on omnidirectional computer vision
(1) Technical Field
The invention belongs to the fields of omnidirectional computer vision sensor technology, image recognition technology, computer control technology and wireless network communication technology, and relates to their application in the anti-theft protection of engineering vehicles, and of their accessories, parked on construction sites or on open ground.
(2) Background Art
With the rapid development of China's economy, engineering vehicles of all kinds have multiplied. At night they are generally parked on construction sites or on open ground, yet their anti-theft protection is at present almost a blank. Many people never expect that an engineering vehicle could be stolen; after work, vehicles are casually left by the roadside or on the site without any anti-theft device, which leaves an opportunity for vehicle thieves.
Within a short two months in the second half of 2005, three vehicle-theft incidents occurred in succession in Nanhui District, Shanghai. The thieves' targets were no longer limited to luxury private cars, but extended to engineering vehicles parked on construction sites.
The first theft occurred in the early morning of October 15. On a construction site at Lane 5614 of Chuannanfeng Highway, a 5-ton crane worth 180,000 yuan was stolen. To get away, the thief smashed through a truck and a distribution box and finally knocked down the gate, yet all that noise failed to wake the gatekeeper sleeping on night duty. The crane has not been recovered. Late at night on October 31, two further attempted thefts of engineering vehicles took place in Zhuxi Village, at two locations less than 300 meters apart; the target this time was a 50-ton truck crane worth 1,500,000 yuan.
Even when the whole vehicle is not stolen, accessories such as wheels are taken, harming the use of the vehicle and causing economic loss to the owner. Moreover, because engineering vehicles are powerful, once a thief has started a stolen vehicle the potential harm to society is greater than that of a stolen private car; and since engineering vehicles are key equipment in a project, their theft can hold up the progress of the whole project.
Therefore, the theft of engineering vehicles or of their parts when parked at night on construction sites, in the field or by the roadside is a serious problem.
Image processing and computer vision are continuously developing technologies. Observation by computer vision serves, in principle, four purposes: preprocessing, extraction of low-level features, recognition of mid-level features, and interpretation of the image into a high-level description of the scene. In general, computer vision comprises feature extraction, image processing and image understanding.
The image is an extension of human vision. Through machine vision, the occurrence of an engineering-vehicle theft can be discovered immediately and accurately. The basis of the rapidity of image monitoring is that vision uses light as its communication medium; image information is also rich and intuitive, which lays a good foundation for detecting vehicle theft; no other current detection technique provides information so abundant and intuitive.
An engineering vehicle anti-theft alarm system is a burglary prevention system with a computer at its core, developed by combining opto-electronic technology, computer image processing and communication technology. The image-based anti-theft detection method for engineering vehicles is a novel detection method based on digital image processing and analysis: it uses an omnidirectional vision camera to monitor the situation inside and outside the vehicle, feeds the captured image sequence into a computer, continuously performs image processing and analysis, and realizes the anti-theft alarm through characteristic features of engineering-vehicle theft.
The recently developed OmniDirectional Vision Sensor (ODVS) provides a new solution for acquiring a panoramic image of a scene in real time. The ODVS is characterized by a wide field of view (360 degrees): the information of a hemispherical field is compressed into one image, so the information content of a single image is large; the placement of the ODVS in the scene is relatively free; no aiming at a target is needed when monitoring the environment; the algorithms for detecting and tracking moving objects within the monitored range are simpler; and real-time images of the scene can be obtained. Omnidirectional vision systems based on ODVS have therefore developed rapidly in recent years and are becoming a key area of computer vision research; since 2000 the IEEE has held an annual workshop on omnidirectional vision (IEEE Workshop on Omni-directional Vision). No paper or patent applying an omnidirectional vision sensor to the field of engineering-vehicle anti-theft alarms has yet been found.
Therefore, by adopting an omnidirectional vision sensor (ODVS), using digital image processing, and combining characteristic features of the process in which an engineering-vehicle theft occurs, reasonable characteristic criteria can be found; in particular, the anti-theft security of engineering vehicles can be further improved through continuous, all-round protection in space and time. The question is how, by integrating omnidirectional optical imaging, computer image processing and network communication technology, to provide a fast and reliable way of collecting visual information over a large monitored field, to judge from the real-time omnidirectional images obtained by the ODVS camera whether anyone enters or leaves the monitored range, to notify monitoring personnel by wireless communication that an intrusion may be occurring, and to record the video images of that moment so that the case can be analyzed and solved afterwards, thereby solving the current difficulty of protecting engineering vehicles, or their parts, parked on construction sites, in the field or by the roadside.
(3) Summary of the Invention
In order to solve the problem that engineering vehicles, and their parts, parked at night on construction sites, in the field or by the roadside are difficult to protect against theft, the invention provides an engineering vehicle anti-theft alarm system based on omnidirectional computer vision with good anti-theft performance, comprising optics that acquire real-time omnidirectional images, a computing method that obtains real-time undistorted perspective and panoramic images, and a method that computes the activity of moving objects within the monitored field from panoramic images taken a certain time apart.
The technical scheme adopted by the present invention to solve its technical problem is:
An engineering vehicle anti-theft alarm system based on omnidirectional computer vision, comprising a microprocessor, an omnidirectional vision sensor for monitoring the anti-theft situation of the engineering vehicle, and a communication module for communicating with the outside world, the output of the omnidirectional vision sensor being connected to the microprocessor. The omnidirectional vision sensor comprises an outwardly convex catadioptric mirror for reflecting objects in the monitored field, a dark cone for preventing light refraction and light saturation, a transparent cylinder and a camera; the convex catadioptric mirror is located at the top of the transparent cylinder and faces downwards, the dark cone is fixed at the center of the convex part of the catadioptric mirror, the camera faces upwards towards the convex mirror surface, and the camera is located at the virtual focus of the convex mirror. The microprocessor comprises:
an image data reading module, used to read the video image information transmitted from the omnidirectional vision sensor;
an image data file storage module, used to save the video image information that has been read into a storage unit as files;
a sensor calibration module, used to calibrate the parameters of the omnidirectional vision sensor and to establish the correspondence between real points in space and the video image obtained;
an image unwrapping module, used to expand the circular video image that has been read into a panoramic rectangular image;
a moving-object detection module, used to perform a difference operation between the current live video frame and a relatively stable reference image, the image subtraction being given by formula (28):

f_d(X, t0, ti) = f(X, ti) − f(X, t0)    (28)

where f_d(X, t0, ti) is the result of subtracting the reference image from the image captured in real time, f(X, ti) is the image captured in real time, and f(X, t0) is the reference image;
and to perform the subtraction between the current image and the image K frames earlier, given by formula (29):

f_d(X, ti−k, ti) = f(X, ti) − f(X, ti−k)    (29)

where f_d(X, ti−k, ti) is the result of subtracting the image K frames earlier from the image captured in real time, and f(X, ti−k) is the image K frames earlier;
if f_d(X, t0, ti) ≥ threshold and f_d(X, ti−k, ti) ≥ threshold, a moving object is judged to be present;
if f_d(X, t0, ti) ≥ threshold and f_d(X, ti−k, ti) < threshold, a stationary object is judged and the reference image is updated and replaced according to formula (30):

f(X, t0) ⇐ f(X, ti−k)    (30)

if f_d(X, t0, ti) < threshold, a stationary (unchanged) scene is judged;
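For illustration, a minimal sketch of this frame-differencing decision logic is given below. It assumes grayscale frames held as NumPy arrays and collapses the pixel-wise comparisons of formulas (28)-(30) into mean absolute differences against a single example threshold; neither the threshold value nor the frame spacing K is fixed by the patent.

```python
import numpy as np

def classify_motion(frame_now, frame_prev_k, reference, threshold=15.0):
    """Frame-differencing decision of formulas (28)-(30).

    Returns ("moving" | "static-new" | "static", reference image to keep).
    All inputs are grayscale images of identical shape; `threshold` is an
    assumed example value.
    """
    # f_d(X, t0, ti): difference against the reference image, formula (28)
    diff_ref = np.abs(frame_now.astype(float) - reference.astype(float)).mean()
    # f_d(X, ti-k, ti): difference against the frame K steps earlier, formula (29)
    diff_adj = np.abs(frame_now.astype(float) - frame_prev_k.astype(float)).mean()

    if diff_ref >= threshold and diff_adj >= threshold:
        return "moving", reference                  # moving object present
    if diff_ref >= threshold and diff_adj < threshold:
        # object has come to rest: refresh the reference image, formula (30)
        return "static-new", frame_prev_k.copy()
    return "static", reference                      # no change
```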
a connected-region computing module, used to label the current image, a pixel value of 0 in a cell indicating no human activity and a pixel value of 1 indicating human activity; whether a pixel in the current image equals the pixels of the adjacent points around it is computed, equality of gray value being judged as connectivity, and all mutually connected pixels are taken as one connected region; the area and center of gravity of each connected region found are then computed; the center of gravity is obtained from the area Si of the connected region and the accumulated pixel coordinates of the region in the X and Y directions, according to formula (34):

X_cg(i) = ( Σ_{(x,y)∈Si} x ) / Si ;  Y_cg(i) = ( Σ_{(x,y)∈Si} y ) / Si    (34);
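A short sketch of the area and center-of-gravity computation of formula (34), assuming the connected-component labels have already been produced by a separate labeling pass (such as the 4-connectivity flood fill sketched at the end of this document):

```python
import numpy as np

def region_area_and_centroid(labels, region_id):
    """Area S_i and centroid (X_cg, Y_cg) of one connected region, formula (34).

    `labels` is an integer image in which every pixel of region i holds the
    value i (0 = background).
    """
    ys, xs = np.nonzero(labels == region_id)
    area = xs.size                     # S_i: number of pixels in the region
    x_cg = xs.sum() / area             # X_cg(i) = sum(x) / S_i
    y_cg = ys.sum() / area             # Y_cg(i) = sum(y) / S_i
    return area, (x_cg, y_cg)
```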
a human-body model building module, used to build a human-body model from the vertices of the bounding rectangle of a connected region and the center of gravity of the target; the module automatically assigns an identification (ID) number to each newly detected target object; since the size and shape of a moving human body change with the viewing angle of the omnidirectional vision sensor, the human-body model is revised dynamically;
an area-size attribute judging module, used to obtain the area Si of each connected region and to apply the following rules:
if Si < threshold 1, the changed region is a noise point;
if Si > threshold 2, the changed region is a large-area change, most likely produced by a change of illumination, but since a person carrying articles cannot be excluded, the region-size influence factor Fs is set between 0.2 and 0.5;
if threshold 1 < Si < threshold 2, the changed region is suspected to be a person, and the region-size influence factor Fs is set to 1;
the values of threshold 1 and threshold 2 are chosen on the basis that the average cross-section of an adult seen from above (in plan view) is about 0.12 m²; the corresponding pixel values, i.e. the sizes of threshold 1 and threshold 2, are then determined from the calibration result of the omnidirectional vision system;
a shape attribute judging module, used to obtain the area Si of each connected region, derive its shape feature attributes and compare them with the human-body model; first the mean width (length in the horizontal direction) and the height of each connected region are found, the mean width wi being the average of the widths of the region divided into four equal parts along the height hi, and a rectangle is formed with this mean width wi and height hi; formula (30) is then used to compute the ratio of the area of the connected region to the area of this rectangle:

ε_area_i = Si / (wi · hi)    (30)

if the computed ε_area_i lies between 0.5 and 0.9, the width-to-height ratio of the rectangle is then computed with formula (31); if ε_area_i is below 0.5 the connected region is excluded:

ε_rate_i = wi / hi    (31)

the computed ε_rate_i is judged against indices defined for several radial zones divided according to spatial position, each zone having its own criterion; for example, within the zone of radius 10 m to 12 m, if ε_rate_i lies between 0.15 and 0.4 the shape-attribute influence factor Fsh is set to 1;
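The two shape measures of formulas (30) and (31) can be sketched as follows; the four-band width averaging follows the description above, while the thresholds (0.5-0.9 for ε_area, 0.15-0.4 per radial zone for ε_rate) are applied by the caller:

```python
import numpy as np

def shape_attributes(mask):
    """Rectangle-fill ratio (30) and width-to-height ratio (31) of one region.

    `mask` is a boolean image containing a single connected region; the mean
    width is taken over four horizontal bands of the region's height.
    """
    ys, xs = np.nonzero(mask)
    top, bottom = ys.min(), ys.max()
    height = bottom - top + 1                      # h_i
    area = xs.size                                 # S_i

    # average width over 4 equal height bands
    bands = np.array_split(np.arange(top, bottom + 1), 4)
    widths = []
    for band in bands:
        sel = np.isin(ys, band)
        if sel.any():
            widths.append(xs[sel].max() - xs[sel].min() + 1)
    mean_width = float(np.mean(widths))            # w_i

    eps_area = area / (mean_width * height)        # formula (30)
    eps_rate = mean_width / height                 # formula (31)
    return eps_area, eps_rate
```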
an activity characteristic judging module, used to obtain the direction of motion, speed and acceleration of a target by comparing the change of its center of gravity between two adjacent frames; if the center of gravity of a target is at (xcg(t), ycg(t)) in frame t and at (xcg(t+1), ycg(t+1)) in frame t+1, the direction of motion is given by (dx = xcg(t+1) − xcg(t), dy = ycg(t+1) − ycg(t)), and the speed is computed by:

V_t = sqrt(dx² + dy²) / Δt    (32)

the acceleration is computed from the speed obtained by formula (32):

a_t = (V_t − V_{t−1}) / Δt    (33)

where Δt is the time interval between the two frames, V_t is the speed from frame t to frame t+1, and a_t is the acceleration at frame t;
if the speed and acceleration exceed the threshold range, the motion is judged not to be produced by a human body and the motion-characteristic influence factor Fmove is set to 0; if they are markedly below the threshold range, Fmove is set between 0.2 and 0.5; otherwise Fmove is set to 1;
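A minimal sketch of the speed and acceleration computation of formulas (32) and (33), assuming centers of gravity are supplied as (x, y) pairs:

```python
import math

def motion_attributes(cg_prev, cg_now, v_prev, dt):
    """Direction, speed (32) and acceleration (33) from two successive centroids.

    cg_prev / cg_now are (x, y) centroids at frames t and t+1, v_prev is the
    speed computed one frame earlier, dt is the inter-frame interval.
    """
    dx = cg_now[0] - cg_prev[0]
    dy = cg_now[1] - cg_prev[1]
    v_t = math.hypot(dx, dy) / dt          # V_t = sqrt(dx^2 + dy^2) / dt
    a_t = (v_t - v_prev) / dt              # a_t = (V_t - V_{t-1}) / dt
    return (dx, dy), v_t, a_t
```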
a behavior type characteristic judging module, used to record the case in which human activity is found within the monitored range for longer than a certain time interval Tduring, which is regarded as suspicion of theft; as the dwell time Tduring near the engineering vehicle grows, the probability of theft is considered to increase, so the behavior-type influence factor Fbehavior is defined to be positively correlated with Tduring and is computed by formula (35) as an increasing function of the dwell time Tduring;
a multiple-tracked-target attribute judging module, used to judge the case in which there are two or more moving targets that simultaneously satisfy the shape attribute and area-size attribute judgements for a person, in which case the multiple-target influence factor Fgroup is set to 1;
a comprehensive judgement processing module, used to make an overall judgement on the basis of the five judgements above so as to reduce the misjudgement rate; a weighting scheme is adopted, the comprehensive judgement being given by formula (36):

W_guard_alarm = K_s × F_s + K_sh × F_sh + K_move × F_move + (K_behavior × F_behavior + K_group × F_group) × F_s    (36)

where:
K_s is the weighting coefficient of the target-object area attribute;
K_sh is the weighting coefficient of the target-object shape attribute;
K_move is the weighting coefficient of the target-object motion attribute;
K_behavior is the weighting coefficient of the target-object behavior-type characteristic;
K_group is the weighting coefficient of the multiple-target attribute;
an abnormality alarm module, used to evaluate the result W_guard_alarm computed by formula (36); when W_guard_alarm is greater than a predetermined threshold value, alarm information is sent through the communication module.
Further, the predetermined threshold values comprise K_attention, K_alarm1 and K_alarm2, with K_attention < K_alarm1 < K_alarm2:
if K_attention ≤ W_guard_alarm ≤ K_alarm1, a suspicious intrusion is judged, a prompt is given, and the image data file storage module is started to record the live video data;
if K_alarm1 < W_guard_alarm ≤ K_alarm2, a theft pre-warning is judged; the monitoring personnel are notified by SMS, voice call or e-mail through the communication module to confirm the images over the network and to confirm the scene, and the image data file storage module is started to record the live video data;
if K_alarm2 < W_guard_alarm, a theft alarm is judged and the public security organ (110) is notified automatically, the notification containing the alarm location, the license plate number and the owner information.
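The weighted score of formula (36) and the three-tier decision above can be sketched as follows; the dictionary keys and return strings are illustrative, and the threshold values are configuration parameters not fixed by the patent:

```python
def guard_alarm_score(F, K):
    """Weighted score of formula (36); F and K are dicts keyed by attribute."""
    return (K["s"] * F["s"] + K["sh"] * F["sh"] + K["move"] * F["move"]
            + (K["behavior"] * F["behavior"] + K["group"] * F["group"]) * F["s"])

def alarm_level(w, k_attention, k_alarm1, k_alarm2):
    """Three-tier decision described above (k_attention < k_alarm1 < k_alarm2)."""
    if w > k_alarm2:
        return "theft alarm: notify police (110) with location, plate, owner"
    if w > k_alarm1:
        return "theft pre-warning: SMS/voice/e-mail the guard, record video"
    if w >= k_attention:
        return "suspicious intrusion: prompt and record live video"
    return "no action"
```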
Further, the microprocessor also comprises a background maintenance module, which comprises:
a background brightness computing unit, used to compute the average background brightness Yb according to formula (25):

Y̅b = ( Σ_{x=0..W−1} Σ_{y=0..H−1} Yn(x, y)·(1 − Mn(x, y)) ) / ( Σ_{x=0..W−1} Σ_{y=0..H−1} (1 − Mn(x, y)) )    (25)

where Yn(x, y) is the brightness of each pixel of the current frame and Mn(x, y) is the mask table of the current frame; the mask table is an array M of the same size as the video frame that records, for each pixel, whether a motion change has occurred, see formula (27);
Yb0 is the background brightness of the frame before a moving object was judged to be present, and Yb1 is the background brightness of the first frame after a moving object was judged to be present; the change of the two average brightness values is:

ΔY = Yb1 − Yb0    (26)

if ΔY is greater than an upper limit, a lamp-on event is considered to have occurred; if ΔY is less than a lower limit, a lamp-off event is considered to have occurred; if ΔY lies between the upper and lower limits, the light is considered to have changed naturally;
a background adaptive unit, used to perform adaptive learning according to formula (22) when the light changes naturally:

X_mix,bn+1(i) = (1 − λ)·X_mix,bn(i) + λ·X_mix,cn(i)    (22)

where X_mix,cn(i) is the RGB vector of the current frame, X_mix,bn(i) is the background RGB vector of the current frame, X_mix,bn+1(i) is the predicted background RGB vector of the next frame, and λ is the background update rate; λ = 0 uses a fixed background (the initial background); λ = 1 uses the current frame as the background; for 0 < λ < 1 the background is a mixture of the previous background and the current frame;
when the light change is caused by switching a lamp, the background pixels are reset from the current frame according to formula (23):

X_mix,bn+1(i) = X_mix,cn(i)    (23).
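A sketch of one step of this background maintenance is given below, combining formulas (22)-(26); the update rate λ and the lamp-on/lamp-off brightness limits are assumed example values:

```python
import numpy as np

def update_background(background, frame, mask, mean_y_prev, lam=0.05,
                      up=30.0, down=-30.0):
    """One adaptive background maintenance step as described above.

    background/frame: float RGB images; mask: 1 where motion was detected.
    lam, up and down are assumed example values for the update rate and the
    lamp-on / lamp-off brightness limits.
    """
    # average background brightness over non-moving pixels, formula (25)
    luma = frame.mean(axis=2)
    still = (mask == 0)
    mean_y = luma[still].mean() if still.any() else mean_y_prev
    delta = mean_y - mean_y_prev                       # formula (26)

    if delta > up or delta < down:
        # lamp switched on/off: reset the background from the current frame, (23)
        new_bg = frame.copy()
    else:
        # natural light drift: IIR blend (22); frozen where motion occurs (24)
        blended = (1.0 - lam) * background + lam * frame
        new_bg = np.where(mask[..., None] == 1, background, blended)
    return new_bg, mean_y
```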
Further, the microprocessor also comprises a noise rejection module, used to replace each pixel value by the average of all values in its local neighborhood, as shown in the following formula:

h[i, j] = (1/M) Σ f[k, l]

where M is the total number of pixels in the neighborhood.
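A sketch of this neighborhood averaging with M = 4 (the value used in the embodiment); edge replication at the image border is an assumption, not specified by the patent:

```python
import numpy as np

def four_neighbour_mean(img):
    """Replace every pixel by the mean of its 4-neighborhood (M = 4)."""
    f = img.astype(float)
    padded = np.pad(f, 1, mode="edge")
    return (padded[:-2, 1:-1] + padded[2:, 1:-1] +     # up and down neighbors
            padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0  # left and right neighbors
```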
The image unwrapping module is used to establish, from the correspondence between a point (x*, y*) on the circular omnidirectional image and a point (x**, y**) on the rectangular cylindrical panorama, a mapping matrix between (x*, y*) and (x**, y**), as shown in formula (21):

P**(x**, y**) ← M × P*(x*, y*)    (21)

where M is the mapping matrix, P*(x*, y*) is the pixel matrix of the circular omnidirectional image, and P**(x**, y**) is the pixel matrix of the rectangular cylindrical panorama.
The microprocessor also comprises: a network transmission module, used to broadcast the live video images over the network as a video stream, so that users can grasp the field situation in real time through various networks; and a real-time playing module, used to play the live video images on a display device.
A suspended swinging object for detecting swinging is installed on the engineering vehicle and placed near the omnidirectional vision sensor; the microprocessor also includes a vibration detection module, used to judge that a theft may be occurring when a vehicle thief causes the suspended object to swing or vibrate up and down.
The engineering vehicle is provided with a lighting unit so that the monitored omnidirectional images can be obtained within the visible-light wavelength range; the lighting unit is an ordinary lamp or an infrared lamp.
The omnidirectional vision sensor is installed at the center of the top of the engineering vehicle.
The working principle of the present invention is as follows: after the arming function is set, the engineering vehicle anti-theft alarm system uses the computer and the omnidirectional vision sensor to realize omnidirectional real-time image anti-theft monitoring of the inside and outside of the engineering vehicle; human figures are identified in the captured images, intrusion into or interference with the engineering vehicle is judged by computing various features, and unauthorized use of the engineering vehicle can be prevented.
First, the manufacturing scheme of the optical part of the omnidirectional vision sensor (ODVS) camera device: the ODVS camera device mainly consists of a catadioptric mirror facing vertically downwards and a camera facing upwards. Concretely, an imaging unit consisting of a condenser lens and a CCD is fixed at the bottom of a cylinder of transparent resin or glass; a catadioptric mirror of large downward curvature is fixed at the top of the cylinder; between the catadioptric mirror and the condenser lens there is a dark cone whose diameter tapers gradually, fixed at the middle of the catadioptric mirror; the purpose of the dark cone is to prevent excess light from entering the interior of the cylinder and causing saturated reflections from the cylinder wall. Fig. 2 is a schematic diagram of the optical system of the omnidirectional imaging device of the present invention.
A catadioptric omnidirectional imaging system can be analyzed with the pinhole imaging model, but obtaining a perspective panorama requires back-projection of the captured real scene image, so the amount of computation is large, especially when the system is used to monitor multiple targets such as the activity of people, or the activity produced by people, where the real-time requirement must be satisfied.
To ensure that the horizontal coordinates of object points in the monitored scene are linear with, i.e. undistorted relative to, the coordinates of the corresponding image points in the horizontal scene, the omnidirectional vision device serving as the anti-theft monitor is installed at the center of the engineering vehicle more than 3 meters above the road surface, or on the cab; an omnidirectional vision sensor installed on the cab can monitor the situation of the monitored field in the horizontal direction around the engineering vehicle. Therefore, when the catadioptric mirror of the omnidirectional vision device is designed, it must be guaranteed to be distortion-free in the horizontal direction.
In the design, a CCD (CMOS) device and an imaging lens are first selected to form the camera, the overall dimensions of the system are estimated on the basis of calibrating the intrinsic parameters of the camera, and the mirror surface shape parameters are then determined according to the field of view in the height direction.
As shown in Fig. 1, the projection center C of the camera is at height h above the horizontal scene, and the apex of the mirror is above the projection center at distance z0 from it. In the present invention a coordinate system is set up with the camera projection center as origin, and the mirror profile is represented by the function z(X). A pixel q at distance ρ from the image center receives the light from a point of the horizontal scene at distance d from the Z axis, reflected at point M of the mirror, onto the image plane. Freedom from distortion of the horizontal scene requires that the horizontal coordinate of a scene object point be linear with the coordinate of the corresponding image point:

d(ρ) = αρ    (1)

where ρ is the distance from the center of the mirror profile in the image plane and α is the magnification of the imaging system.
Let the angle between the normal of the mirror at point M and the Z axis be γ, the angle between the incident ray and the Z axis be φ, and the angle between the reflected ray and the Z axis be θ. Then:

tan φ = (d(x) − x) / (z(x) − h)    (2)

tan γ = dz(x)/dx    (3)

tan(2γ) = (2·dz(x)/dx) / (1 − (dz(x)/dx)²)    (4)

tan θ = ρ/f = x / z(x)    (5)

By the law of reflection:

2γ = φ − θ    (6)
From formulas (2), (4), (5) and (6) the differential equation (7) is obtained:

(dz(x)/dx)² + 2k·(dz(x)/dx) − 1 = 0    (7)

where:

k = ( z(x)·[z(x) − h] + x·[d(x) − x] ) / ( z(x)·[d(x) − x] + x·[z(x) − h] )    (8)

From formula (7) the differential equation (9) is obtained:

dz(x)/dx + k − sqrt(k² + 1) = 0    (9)

From formulas (1) and (5), formula (10) is obtained:

d(x) = αf·x / z(x)    (10)

From formulas (8), (9), (10) and the initial conditions, the differential equation can be solved numerically to obtain the mirror surface shape. The main overall dimensions of the system are the distance Ho of the mirror from the camera and the mirror aperture D. In designing the catadioptric panoramic system, a suitable camera is selected according to the application requirements, Rmin and the focal length f of the lens are calibrated, the distance Ho of the mirror from the camera is determined, and the mirror aperture Do is computed from formula (1).
Determination of the system parameters:
The system parameter αf is determined according to the required field of view in the height direction. Formula (11) is obtained from formulas (1), (2) and (5), with the simplification z(x) ≈ z0, the main consideration being that the height variation of the mirror surface is small relative to the positions of the mirror and the camera:

tan φ = (αf − z0)·ρ / ( f·(z0 − h) )    (11)

At the largest circle of the image plane centered on the image center, ρ = Rmin and ωmax = Rmin / f, and the corresponding field of view is φmax. Formula (12) is then obtained:

αf = (z0 − h)·tan φmax / ωmax + z0    (12)

The imaging simulation is carried out in the direction opposite to the actual light. Assuming a light source at the camera projection center, pixel points are selected at equal intervals in the image plane; the rays through these pixels are reflected by the mirror and intersect the horizontal plane; if the intersection points are equally spaced, the mirror has the property of being free of horizontal-scene distortion. The imaging simulation can, on the one hand, evaluate the imaging behavior of the mirror and, on the other hand, compute the mirror aperture and thickness exactly.
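The determination of the system parameters can be sketched as follows, computing αf from formula (12), the equivalent focal length F from formula (15) and the magnification α of formula (1); all argument values are placeholders, and the signs of z0 and h must follow the coordinate convention of Fig. 1 as transcribed in formulas (2)-(12):

```python
import math

def design_parameters(f, h, z0, phi_max_deg, r_min):
    """System parameters of formulas (1), (12), (14), (15) - a design sketch.

    f: lens focal length, h: height of the projection center above the ground,
    z0: mirror apex height above the projection center, phi_max_deg: required
    field of view in the height direction, r_min: image radius of the largest
    usable circle (all in consistent units, per the Fig. 1 convention).
    """
    omega_max = r_min / f                              # omega_max = R_min / f
    tan_phi = math.tan(math.radians(phi_max_deg))
    alpha_f = (z0 - h) * tan_phi / omega_max + z0      # formula (12)
    F = f * h * omega_max / ((z0 - h) * tan_phi + z0 * omega_max)  # formula (15)
    alpha = alpha_f / f                                # alpha = h / F; d(rho) = alpha * rho, formula (1)
    return alpha, alpha_f, F
```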
Several key problems involved in implementing the present invention, such as calibration and target recognition, are further explained below:
(1) How to calibrate the correspondence between pixel distances in the imaging plane of the omnidirectional vision sensor and actual three-dimensional distances, and, on this basis, how to classify moving objects. The imaging plane of the omnidirectional vision camera is two-dimensional, with the pixel as the unit of measurement. Because the engineering vehicle is parked in the field or by the roadside, the changes observed by the omnidirectional vision sensor are very complicated, and may include reflections and illumination from various light sources, various vehicles, various pets, and other moving-object-like shadows such as moonlight, drifting clouds and vibration. To facilitate further tracking and behavior analysis, correct classification of moving targets is entirely necessary. Classification methods include classification based on shape and size information and classification based on motion characteristics.
(2) How to carry out target tracking. Tracking is equivalent to creating a correspondence matching problem between successive image frames based on related features such as position, speed, shape and behavior of multiple targets. In the present invention this is combined with the attribute information of the persons in the activity, providing an effective target tracking method with high robustness and good real-time performance. This tracking is in fact a synthesis of model-based, region-based, active-contour-based and color-feature-based tracking methods.
Calibration of the field-of-view distance of the omnidirectional vision camera involves the theory of imaging geometry; projecting the three-dimensional scene of the objective world onto the two-dimensional image plane of the camera requires establishing and describing a camera model. These image transformations involve conversions between different coordinate systems. In the imaging system of the camera the following four coordinate systems are involved: (1) the real-world coordinate system XYZ; (2) the coordinate system x̂ŷẑ with the camera as center; (3) the image-plane coordinate system x*y*o* formed in the camera; (4) the computer image coordinate system MN used for the digital image inside the computer, with the pixel as unit.
According to the transformation relations between the above coordinate systems, the required imaging model of the omnidirectional vision camera can be obtained, converting the two-dimensional image back into correspondence with the three-dimensional scene. In the present invention the catadioptric omnidirectional imaging system is analyzed approximately as perspective imaging, converting the two-dimensional image formed in the image-plane coordinates of the camera into the correspondence with the three-dimensional scene. Fig. 3 shows the general perspective imaging model, in which d is the height of a person, ρ is the image height of the human body, t is the distance of the human body and F is the image distance (equivalent focal length). Formula (13) is obtained:

d = (t/F)·ρ    (13)

In the design of the above catadioptric omnidirectional imaging system free of horizontal-scene distortion, the horizontal coordinate of a scene object point is required to be linear with the coordinate of the corresponding image point, as expressed by formula (1). Comparing formulas (13) and (1), it can be seen that the imaging of the horizontal scene by the distortion-free catadioptric omnidirectional imaging system is perspective imaging. Therefore, as far as horizontal-scene imaging is concerned, the catadioptric omnidirectional imaging system without horizontal-scene distortion can be regarded as a perspective camera, α being the magnification of the imaging system. Let the projection center of this virtual perspective camera be point C (see Fig. 3) and its equivalent focal length be F. Comparing formulas (13) and (1), formula (14) is obtained:

α = t / F ;  t = h    (14)

From formulas (12) and (14), formula (15) is obtained:

F = f·h·ωmax / ( (z0 − h)·tan φmax + z0·ωmax )    (15)

The system imaging simulation carried out according to the above omnidirectional camera imaging model shows that the rays emitted from the camera projection center through equally spaced pixels of the pixel plane, after reflection by the mirror, intersect the horizontal plane at a distance of 3 m from the projection center at essentially equal intervals, as shown in Fig. 4. Therefore, according to the above design concept, this patent reduces the relation between the coordinates of the level road surface and the coordinates of the corresponding omnidirectional image points to a linear relation; that is, through the design of the mirror surface, the conversion from the real-world coordinate system XYZ to the image-plane coordinate system can be made linear with the magnification α as ratio. Next is the conversion from the image-plane coordinate system to the coordinate system used for the digital image inside the computer: the image coordinate unit used in the computer is the number of discrete pixels in the memory, so the coordinates of the actual image plane must also be rounded before they can be mapped to the imaging plane of the computer; the conversion is given by formula (16):

M = Om − x*/Sx ;  N = On − y*/Sy    (16)

where Om and On are respectively the row number and column number, on the computer image plane, of the pixel onto which the origin of the image plane is mapped, and Sx and Sy are the scale factors in the x and y directions. Sx and Sy are determined by placing a calibration board at distance Z between the camera and the mirror surface and calibrating the camera; their unit is the pixel. Om and On are determined from the resolution of the selected camera, in pixels.
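A sketch of the coordinate conversion of formula (16), with Om, On, Sx and Sy taken as calibration values; the direction of the offset follows formula (16) as reconstructed above:

```python
def photo_to_pixel(x_star, y_star, o_m, o_n, s_x, s_y):
    """Formula (16): map image-plane coordinates (x*, y*) to pixel indices (M, N).

    o_m, o_n: pixel row/column of the image-plane origin; s_x, s_y: calibrated
    scale factors. The result is rounded to discrete pixel indices.
    """
    m = o_m - x_star / s_x
    n = o_n - y_star / s_y
    return int(round(m)), int(round(n))
```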
Further, the principle of 360° omnidirectional imaging is described: a point A (x1, y1, z1) in space is reflected by the catadioptric mirror 1 towards the lens 4 and has a corresponding projected point P1 (x*1, y*1); the light passing through the lens 4 becomes parallel light and is projected onto the CCD imaging unit 5; the microprocessor 6 reads this annular image through the video interface, unwraps it by software into an omnidirectional image, and displays it on the display unit 7 or publishes it to a web page through a video server.
Further, for the unwrapping method, a fast approximate expansion algorithm is adopted in this patent, which reduces the time consumption and the requirements on the various parameters to a minimum while preserving as much useful information as possible. There are three expansion rules:
(1) the X* axis is the reference position, and the expansion proceeds counterclockwise;
(2) the intersection point O of the X* axis and the inner radius r in the left figure corresponds to the origin O**(0, 0) at the lower left corner of the right figure;
(3) the width of the expanded right figure equals the circumference of the circle shown by the dashed line in the left figure, where the dashed circle is a circle concentric with the outer circle in the left figure, with radius r1 = (r + R)/2.
Let the center O* of the circular image have coordinates (x*0, y*0), and let the origin at the lower left corner of the expanded rectangular image be O**(0, 0); a point P** = (x**, y**) in the rectangular image corresponds to coordinates (x*, y*) in the circular image. What is needed is the correspondence between (x*, y*) and (x**, y**). From the geometric relations the following formulas are obtained:

β = tan⁻¹(y*/x*)    (17)

r1 = (r + R)/2    (18)

The radius of the dashed circle is taken as r1 = (r + R)/2 so that the deformation of the expanded figure appears uniform.

x* = y* / tan( 2x** / (R + r) )    (19)

y* = (y** + r)·cos β    (20)

From formulas (19) and (20) the correspondence between a point (x*, y*) on the circular omnidirectional image and a point (x**, y**) on the rectangular panorama is obtained. This method is essentially an image interpolation process. After expansion, the image above the dashed line has been compressed horizontally and the image below the dashed line has been stretched horizontally, while points lying on the dashed line itself remain unchanged.
To satisfy the need for real-time computation, the mapping matrix between (x*, y*) and (x**, y**) can likewise be established from the correspondence between a point (x*, y*) on the circular omnidirectional image and a point (x**, y**) on the rectangular panorama. Because of this one-to-one correspondence, the image can be transformed into an undistorted panoramic image by the mapping-matrix method. The relation of formula (21) can be established by the mapping matrix M:

P**(x**, y**) ← M × P*(x*, y*)    (21)

According to formula (21), each pixel P*(x*, y*) of the omnidirectional image on the imaging plane has a corresponding point P**(x**, y**); once the mapping matrix M has been established, the task of real-time image processing is simplified.
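A sketch of the lookup-table (mapping matrix) idea of formula (21) is given below. It uses the standard polar unwrapping relation, with the panorama width set by the mid-circle radius r1 = (r + R)/2 described above; the patent expresses the same point correspondence through formulas (17)-(20).

```python
import numpy as np

def build_unwrap_maps(cx, cy, r, R):
    """Precompute the mapping of formula (21) as two lookup arrays.

    (cx, cy): center of the circular omnidirectional image; r, R: inner and
    outer radii. The panorama height is R - r and its width the circumference
    of the mid circle r1 = (r + R) / 2; row y** = 0 corresponds to radius r.
    """
    r1 = (r + R) / 2.0
    width = int(round(2.0 * np.pi * r1))
    height = int(round(R - r))
    beta = (2.0 * np.pi) * np.arange(width) / width     # azimuth per column
    rho = r + np.arange(height)                         # radius per row
    map_x = cx + np.outer(rho, np.cos(beta))            # source x* for each (y**, x**)
    map_y = cy + np.outer(rho, np.sin(beta))            # source y* for each (y**, x**)
    return map_x, map_y

def unwrap(circular_img, map_x, map_y):
    """Apply the precomputed maps with nearest-neighbor sampling."""
    xs = np.clip(np.round(map_x).astype(int), 0, circular_img.shape[1] - 1)
    ys = np.clip(np.round(map_y).astype(int), 0, circular_img.shape[0] - 1)
    return circular_img[ys, xs]
```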
The monitoring image captured by the omnidirectional vision sensor is a stereoscopic three-dimensional view: installing the omnidirectional vision sensor at the top center of the monitored space (for example on top of the cab of the engineering vehicle) makes it possible to monitor the situation of every location in the monitored field without dead angles; at the same time, points in the monitored space have a mapping relation with points in the image frame, so the spatial position where intrusion into or interference with the engineering vehicle occurs can be computed through this mapping relation, enabling the intrusion or interference to be monitored as a process and improving the accuracy of the engineering vehicle anti-theft alarm.
The beneficial effects of the present invention are mainly: real-time undistorted perspective and panoramic images are obtained by the omnidirectional vision sensor, the activity of moving objects within the monitored field is computed from panoramic images taken a certain time apart, and whether a theft is occurring is judged in real time; the anti-theft performance is good, and the difficult problem of protecting engineering vehicles against theft is solved.
(4) Description of the Drawings
Fig. 1 is a schematic diagram of the omnidirectional vision optics;
Fig. 2 is a schematic diagram of the hardware configuration of an engineering vehicle anti-theft alarm system based on omnidirectional computer vision;
Fig. 3 is a schematic diagram of the perspective projection imaging model equivalent to the general perspective imaging model of the omnidirectional vision device;
Fig. 4 is a schematic diagram of the simulation of the horizontally undistorted imaging of the omnidirectional vision device;
Fig. 5 is a flow chart of the engineering vehicle anti-theft alarm monitoring processing in the omnidirectional vision device;
Fig. 6 is a diagram of the relations between the modules of the engineering vehicle anti-theft alarm system based on omnidirectional computer vision.
(5) Embodiments
The invention is further explained below with reference to the accompanying drawings.
Embodiment 1
Referring to Figs. 1 to 6, an engineering vehicle anti-theft alarm system based on omnidirectional computer vision comprises a microprocessor 6, an omnidirectional vision sensor 13 for monitoring the anti-theft situation of the engineering vehicle, and a communication module for communicating with the outside world; the output of the omnidirectional vision sensor 13 is connected to the microprocessor 6 through a USB interface 14. The omnidirectional vision sensor 13 comprises an outwardly convex catadioptric mirror 1 for reflecting objects in the monitored field, a dark cone 2 for preventing light refraction and light saturation, a transparent cylinder 3 and a camera 5; the convex catadioptric mirror 1 is located at the top of the transparent cylinder 3 and faces downwards, the dark cone 2 is fixed at the center of the convex part of the catadioptric mirror, the camera 5 faces upwards towards the convex mirror surface 1 and is located at the virtual focus of the convex mirror, and the camera 5 also comprises a lens 4. The microprocessor comprises:
an image data reading module 16, used to read the video image information transmitted from the omnidirectional vision sensor and to perform image preprocessing;
an image data file storage module 18, used to save the video image information that has been read into a storage unit as files;
a sensor calibration module 17, used to calibrate the parameters of the omnidirectional vision sensor and to establish the correspondence between real points in space and the video image obtained;
an image unwrapping module 19, used to expand the circular video image that has been read into a panoramic rectangular image;
a moving-object detection module 23, used to perform a difference operation between the current live video frame and a relatively stable reference image, the image subtraction being given by formula (28):

f_d(X, t0, ti) = f(X, ti) − f(X, t0)    (28)

where f_d(X, t0, ti) is the result of subtracting the reference image from the image captured in real time, f(X, ti) is the image captured in real time, and f(X, t0) is the reference image;
and to perform the subtraction between the current image and the image K frames earlier, given by formula (29):

f_d(X, ti−k, ti) = f(X, ti) − f(X, ti−k)    (29)

where f_d(X, ti−k, ti) is the result of subtracting the image K frames earlier from the image captured in real time, and f(X, ti−k) is the image K frames earlier;
if f_d(X, t0, ti) ≥ threshold and f_d(X, ti−k, ti) ≥ threshold, a moving object is judged to be present;
if f_d(X, t0, ti) ≥ threshold and f_d(X, ti−k, ti) < threshold, a stationary object is judged and the reference image is updated and replaced according to formula (30):

f(X, t0) ⇐ f(X, ti−k)    (30)

if f_d(X, t0, ti) < threshold, a stationary (unchanged) scene is judged;
a connected-region computing module, used to label the current image, a pixel value of 0 in a cell indicating no human activity and a pixel value of 1 indicating human activity; whether a pixel in the current image equals the pixels of the adjacent points around it is computed, equality of gray value being judged as connectivity, and all mutually connected pixels are taken as one connected region; the area and center of gravity of each connected region found are then computed; the center of gravity is obtained from the area Si of the connected region and the accumulated pixel coordinates of the region in the X and Y directions, according to formula (34):

X_cg(i) = ( Σ_{(x,y)∈Si} x ) / Si ;  Y_cg(i) = ( Σ_{(x,y)∈Si} y ) / Si    (34);
a human-body model building module 34, used to build a human-body model from the vertices of the bounding rectangle of a connected region and the center of gravity of the target; the module automatically assigns an identification (ID) number to each newly detected target object; since the size and shape of a moving human body change with the viewing angle of the omnidirectional vision sensor, the human-body model is revised dynamically;
an area-size attribute judging module, used to obtain the area Si of each connected region and to apply the following rules:
if Si < threshold 1, the changed region is a noise point;
if Si > threshold 2, the changed region is a large-area change, most likely produced by a change of illumination, but since a person carrying articles cannot be excluded, the region-size influence factor Fs is set between 0.2 and 0.5;
if threshold 1 < Si < threshold 2, the changed region is suspected to be a person, and the region-size influence factor Fs is set to 1;
the values of threshold 1 and threshold 2 are chosen on the basis that the average cross-section of an adult seen from above (in plan view) is about 0.12 m²; the corresponding pixel values, i.e. the sizes of threshold 1 and threshold 2, are then determined from the calibration result of the omnidirectional vision system.
a shape attribute judging module, used to obtain the area Si of each connected region, derive its shape feature attributes and compare them with the human-body model; first the mean width (length in the horizontal direction) and the height of each connected region are found, the mean width wi being the average of the widths of the region divided into four equal parts along the height hi, and a rectangle is formed with this mean width wi and height hi; formula (30) is then used to compute the ratio of the area of the connected region to the area of this rectangle:

ε_area_i = Si / (wi · hi)    (30)

if the computed ε_area_i lies between 0.5 and 0.9, the width-to-height ratio of the rectangle is then computed with formula (31); if ε_area_i is below 0.5 the connected region is excluded:

ε_rate_i = wi / hi    (31)

the computed ε_rate_i is judged against indices defined for several radial zones divided according to spatial position, each zone having its own criterion; for example, within the zone of radius 10 m to 12 m, if ε_rate_i lies between 0.15 and 0.4 the shape-attribute influence factor Fsh is set to 1.
an activity characteristic judging module, used to obtain the direction of motion, speed and acceleration of a target by comparing the change of its center of gravity between two adjacent frames; if the center of gravity of a target is at (xcg(t), ycg(t)) in frame t and at (xcg(t+1), ycg(t+1)) in frame t+1, the direction of motion is given by (dx = xcg(t+1) − xcg(t), dy = ycg(t+1) − ycg(t)), and the speed is computed by:

V_t = sqrt(dx² + dy²) / Δt    (32)

the acceleration is computed from the speed obtained by formula (32):

a_t = (V_t − V_{t−1}) / Δt    (33)

where Δt is the time interval between the two frames, V_t is the speed from frame t to frame t+1, and a_t is the acceleration at frame t;
if the speed and acceleration exceed the threshold range, the motion is judged not to be produced by a human body and the motion-characteristic influence factor Fmove is set to 0; if they are markedly below the threshold range, Fmove is set between 0.2 and 0.5; otherwise Fmove is set to 1;
a behavior type characteristic judging module, used to record the case in which human activity is found within the monitored range for longer than a certain time interval Tduring, which is regarded as suspicion of theft; as the dwell time Tduring near the engineering vehicle grows, the probability of theft is considered to increase, so the behavior-type influence factor Fbehavior is defined to be positively correlated with Tduring and is computed by formula (35) as an increasing function of the dwell time Tduring;
a multiple-tracked-target attribute judging module, used to judge the case in which there are two or more moving targets that simultaneously satisfy the shape attribute and area-size attribute judgements for a person, in which case the multiple-target influence factor Fgroup is set to 1.
a comprehensive judgement processing module 32, used to make an overall judgement on the basis of the five judgements above so as to reduce the misjudgement rate; a weighting scheme is adopted, the comprehensive judgement being given by formula (36):

W_guard_alarm = K_s × F_s + K_sh × F_sh + K_move × F_move + (K_behavior × F_behavior + K_group × F_group) × F_s    (36)

where:
K_s is the weighting coefficient of the target-object area attribute;
K_sh is the weighting coefficient of the target-object shape attribute;
K_move is the weighting coefficient of the target-object motion attribute;
K_behavior is the weighting coefficient of the target-object behavior-type characteristic;
K_group is the weighting coefficient of the multiple-target attribute;
an abnormality alarm module, used to evaluate the result W_guard_alarm computed by formula (36); when W_guard_alarm is greater than a predetermined threshold value, alarm information is sent through the communication module.
The monitoring image captured by the omnidirectional vision sensor is a stereoscopic three-dimensional view: installing the omnidirectional vision sensor at the top center of the monitored space (for example on top of the cab of the engineering vehicle) makes it possible to monitor the situation of every location in the monitored field without dead angles; at the same time, points in the monitored space have a mapping relation with points in the image frame, so the spatial position where intrusion into or interference with the engineering vehicle occurs can be computed through this mapping relation, enabling the intrusion or interference to be monitored as a process and improving the accuracy of the engineering vehicle anti-theft alarm.
After the omnidirectional video information has been obtained, an engineering vehicle anti-theft device based on omnidirectional computer vision must next carry out evaluation work such as background elimination, target extraction and target tracking. The first problem that background elimination must solve is that of brightness changes. For engineering vehicles parked in the open and by the roadside there may be sudden illumination by light (such as the headlights of passing vehicles); and because of insufficient light at night, a supplementary light source has to be provided so that the omnidirectional vision sensor can capture the visual images around the engineering vehicle. All of these cause the background light to change, so the background model used in background elimination must adapt to these changes.
For video monitoring, since the omnidirectional field of view is large and the proportion of the image occupied by a human body is small, the motion of a person can be regarded as approximately rigid motion; in addition, the scene of the video monitoring is fixed and can be considered to have a background at a relatively fixed distance, so a fast segmentation algorithm based on background subtraction can be used to detect and track in real time the moving persons or objects in the video. Background elimination is the key to detecting moving objects with a background subtraction algorithm; it directly affects the completeness and accuracy of moving-object detection. The present invention adopts a background-adaptive method whose core idea is to represent each background pixel by one group of vectors: the current mixed RGB value (X_mix,bi) represents the admissible value of a legal background pixel (i being the frame number), and IIR filtering is used to update it as follows.
(1) When the light changes naturally, for example from dusk into the night or from the night to early morning (not caused by switching a street lamp or the vehicle's own lighting), and no abnormal object is present, the group of vectors (one each for R, G and B) undergoes adaptive learning:

X_mix,bn+1(i) = (1 − λ)·X_mix,bn(i) + λ·X_mix,cn(i)    (22)

where X_mix,cn(i) is the RGB vector of the current frame, X_mix,bn(i) is the background RGB vector of the current frame, X_mix,bn+1(i) is the predicted background RGB vector of the next frame, and λ is the background update rate: λ = 0 uses a fixed background (the initial background); λ = 1 uses the current frame as the background; for 0 < λ < 1 the background is a mixture of the previous background and the current frame.
(2) When the light changes suddenly (caused by switching a street lamp or the vehicle's own lighting), the group of vectors is reset from the current frame:

X_mix,bn+1(i) = X_mix,cn(i)    (23)

(3) When an object enters the monitoring range, the background remains unchanged. To avoid learning some pixels of the moving object into the background, the following is adopted:

X_mix,bn+1(i) = X_mix,bn(i)    (24)

In the above formulas X_mix,bn+1(i) (i = 1, 2, 3) represents the three components R, G and B respectively; for simplicity the coordinate (x, y) of each pixel has been omitted from the formulas.
Background luminance changes such as switching a lamp on or off should not trigger an alarm, so analysing the background luminance helps reduce the false-alarm rate of the system. The background luminance is measured by the average background brightness Yb, computed with formula (25):
Yb = [ Σ_{x=0}^{W−1} Σ_{y=0}^{H−1} Yn(x, y)·(1 − Mn(x, y)) ] / [ Σ_{x=0}^{W−1} Σ_{y=0}^{H−1} (1 − Mn(x, y)) ]    (25)
In formula (25), Yn(x, y) is the brightness of each pixel of the current frame and Mn(x, y) is the mask table of the current frame. Let Yb0 denote the background luminance of the frame just before an abnormal object is found, and Yb1 the background luminance of the first frame in which the abnormal object is detected; the change of the mean luminance between the two frames is:
ΔY=Yb1-Yb0 (26)
If ΔY is greater than a certain positive value, a light-on event is considered to have occurred; if ΔY is less than a certain negative value, a light-off event is considered to have occurred. According to this judgement the current frame is used to reset the background with formula (23). The background-adaptive algorithm is completed in the background refresh processing module 29.
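A brief sketch of this luminance test follows; Mn is the mask table described next, and the threshold value passed to classify() is a tuning assumption rather than a value given in the patent.

```java
/** Sketch of the mean-background-luminance test of formulas (25)-(26). */
public class LuminanceChange {

    /** Formula (25): average brightness over pixels NOT marked as moving (mask value 0). */
    public static double meanBackgroundLuminance(double[][] brightness, int[][] mask) {
        double sum = 0; int count = 0;
        for (int y = 0; y < brightness.length; y++)
            for (int x = 0; x < brightness[0].length; x++)
                if (mask[y][x] == 0) { sum += brightness[y][x]; count++; }
        return count > 0 ? sum / count : 0;
    }

    /** Formula (26): deltaY = Yb1 - Yb0; large positive -> light on, large negative -> light off. */
    public static String classify(double yb0, double yb1, double threshold) {
        double dY = yb1 - yb0;
        if (dY > threshold)  return "LIGHT_ON";   // reset background with formula (23)
        if (dY < -threshold) return "LIGHT_OFF";  // reset background with formula (23)
        return "NATURAL";                         // keep IIR learning with formula (22)
    }
}
```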
The mask table records, in an array M of the same size as the video frame, whether each pixel has undergone a motion change; this array is called the mask map (Mask Map):
Mn(x, y) = 1 if pixel (x, y) is detected as changed by motion, and Mn(x, y) = 0 otherwise    (27)
The array M is the binary image of the moving object; it can therefore be used not only to mask the video frame so as to segment the moving object, but also for tracking, analysing and classifying the moving object.
The background-subtraction algorithm, also called the difference method, is computed in the moving-region detection module 23. It is an image-processing method commonly used to detect image changes and moving objects: according to the correspondence between three-dimensional space and image pixels, it detects the pixel portions where a light source point exists. A relatively stable reference image must first be available and stored in the computer's memory; this reference image is dynamically updated by the adaptive background method described above; the live image is then subtracted from this reference image, and the brightness of the regions where the subtraction result changes is enhanced. The image-subtraction formula is given by formula (28):
f_d(X, t0, ti) = f(X, ti) − f(X, t0)    (28)
In the formula, f_d(X, t0, ti) is the result of the subtraction between the live image and the reference image; f(X, ti) is the live image, corresponding to X_mix,cn(i) in formula (22); f(X, t0) is the reference image, corresponding to X_mix,bn(i) in formula (22).
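A small sketch of this differencing step follows; taking the absolute difference and a single fixed threshold is an illustrative assumption, not necessarily the exact thresholding rule of the patent.

```java
/** Sketch of the image-subtraction step of formula (28): the live frame is compared
 *  with the adaptively maintained reference image and changed pixels are flagged. */
public class FrameDifference {
    public static int[][] differenceMask(double[][] live, double[][] reference,
                                         double threshold) {
        int h = live.length, w = live[0].length;
        int[][] mask = new int[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                // mark the pixel as changed when |live - reference| exceeds the threshold
                mask[y][x] = Math.abs(live[y][x] - reference[y][x]) > threshold ? 1 : 0;
        return mask;
    }
}
```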
The actual image signal contains noise, which generally appears as a high-frequency signal, so image edge points produced by noise must be rejected during recognition.
To reject edge points produced by noise, the present invention uses a four-neighbourhood traversal: the value of each pixel is replaced by the average grey value of the pixels in the neighbourhood defined by the filter mask, i.e. each pixel value is replaced by the average of all values in its local neighbourhood, as shown in formula (29):
h[i, j] = (1/M)·Σ f[k, l],  with (k, l) ranging over the neighbourhood of (i, j)    (29)
In the formula, M is the number of pixels in the neighbourhood, taken as 4 in the present invention.
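A sketch of this neighbourhood averaging with M = 4 follows; the handling of image borders (averaging over the neighbours that exist) is an assumption for illustration.

```java
/** Sketch of the four-neighbourhood averaging of formula (29): each pixel is replaced
 *  by the mean of its up/down/left/right neighbours, suppressing isolated noise points. */
public class NeighbourhoodSmoothing {
    public static double[][] smooth4(double[][] img) {
        int h = img.length, w = img[0].length;
        double[][] out = new double[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                double sum = 0; int m = 0;
                if (y > 0)     { sum += img[y - 1][x]; m++; }
                if (y < h - 1) { sum += img[y + 1][x]; m++; }
                if (x > 0)     { sum += img[y][x - 1]; m++; }
                if (x < w - 1) { sum += img[y][x + 1]; m++; }
                out[y][x] = m > 0 ? sum / m : img[y][x];
            }
        }
        return out;
    }
}
```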
Connectedness between pixel is to determine a key concept in zone.In two dimensional image, the individual adjacent pixels of m (m<=8) is arranged around the hypothetical target pixel, if this pixel grey scale equate with the gray scale of some some A in this m pixel, claim this pixel so and put A to have connectedness.Connectedness commonly used has 4 connected sums 8 to be communicated with.4 are communicated with four points in upper and lower, left and right of generally choosing object pixel.8 are communicated with and then choose object pixel all neighbor in two-dimensional space.All are had connective pixel then constituted a connected region as a zone.
Described connected region is calculated and is mainly solved in image processing process, a width of cloth bianry image, and its background and target have gray-scale value 0 and 1 respectively.To such bianry image, carry out mark to target, calculate each clarification of objective to discern, in the design of multiple goal real-time tracking system, need a kind of connected component labeling algorithm of saving internal memory fast.We are that 0 sub-district represents that this sub-district do not have the monitor activities object with pixel, if there is monitored object 1 this sub-district of expression.So can adopt connection composition scale notation to carry out the merging of defect area.The connection labeling algorithm can find all the connection compositions in the image, and the institute in the same connection composition is distributed same mark a little.Fig. 5 is for being communicated with the mark schematic diagram.Be the connected region algorithm below,
1) Scan the image from left to right and from top to bottom;
2) If the pixel value is 1, then:
if only one of the upper point and the left point has a label, copy that label;
if the two points have the same label, copy that label;
if the two points have different labels, copy the smaller label and enter both labels into the equivalence table as equivalent labels;
otherwise (neither point is labelled) assign a new label to this pixel and enter it into the equivalence table;
3) If more points need to be considered, return to step 2);
4) Find the smallest label of each equivalence set in the equivalence table;
5) Scan the image again and replace each label with the smallest label of its equivalence set.
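The following sketch implements the two-pass labelling described above for 4-connectivity; the class name and the use of a hash map for the equivalence table are implementation assumptions.

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch of two-pass connected-component labelling with an equivalence table,
 *  following steps 1)-5) above (upper and left neighbours, 4-connectivity). */
public class ConnectedComponents {
    public static int[][] label(int[][] binary) {
        int h = binary.length, w = binary[0].length;
        int[][] labels = new int[h][w];
        Map<Integer, Integer> equiv = new HashMap<>();   // label -> equivalent smaller label
        int next = 1;

        // pass 1: scan left-to-right, top-to-bottom (steps 1-3)
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (binary[y][x] != 1) continue;
                int up   = y > 0 ? labels[y - 1][x] : 0;
                int left = x > 0 ? labels[y][x - 1] : 0;
                if (up == 0 && left == 0) {                 // neither neighbour labelled: new label
                    labels[y][x] = next;
                    equiv.put(next, next);
                    next++;
                } else if (up == 0 || left == 0 || up == left) {
                    labels[y][x] = Math.max(up, left);      // copy the single existing label
                } else {                                    // two different labels: record equivalence
                    labels[y][x] = Math.min(up, left);
                    union(equiv, up, left);
                }
            }
        }
        // pass 2: replace every label by the smallest equivalent label (steps 4-5)
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                if (labels[y][x] != 0) labels[y][x] = find(equiv, labels[y][x]);
        return labels;
    }

    private static int find(Map<Integer, Integer> equiv, int a) {
        while (equiv.get(a) != a) a = equiv.get(a);
        return a;
    }
    private static void union(Map<Integer, Integer> equiv, int a, int b) {
        int ra = find(equiv, a), rb = find(equiv, b);
        if (ra < rb) equiv.put(rb, ra); else equiv.put(ra, rb);
    }
}
```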
Two major issues arise in inter-frame segmentation: (1) using the segmentation result of the previous frame as far as possible to guide the segmentation of the current frame, so as to improve efficiency; (2) establishing the correspondence of the same moving object between different frames. The algorithm must therefore maintain a storage structure that preserves the segmentation result of the previous frame and the current motion parameters of the targets.
"Object matching" solves the target pairing problem between two frames, mainly by optimum matching based on information such as spatial position (including motion prediction), area size, shape, texture and colour, and by judging at a deeper level whether the matched object is a person, so as to improve the recognition rate. The video monitoring object of the present invention is a person who steals the engineering vehicle, so it is necessary to find a series of human attributes with which the monitored objects can be matched, and on this basis to solve the target pairing and target tracking problems from the information between different frames. The human attributes usable in the video monitoring include:
1) area-size attribute: the cross-section of a person seen from above is about 0.12 m²;
2) shape attribute: the shape of a person seen from above is roughly elliptical;
3) activity characteristic: the overall speed or acceleration of a human body moving by itself (without any tool or vehicle) has a threshold range, and this attribute can be used to distinguish flying objects such as paper or plastic film;
4) behaviour-type characteristic: a thief stealing an engineering vehicle behaves differently from a passer-by, tending to stay near or in contact with the vehicle for a long time during the crime; a thief who wants to steal the whole vehicle will try to enter the cab, and a thief who wants to steal parts will work around those parts to remove them from the vehicle;
5) multi-tracked-target attribute: when a gang commits the crime, several moving targets are found in the monitoring range at the same time, indicating that gang theft of the engineering vehicle is more likely.
The various attributes of the objects in the scene are therefore used for object matching, the matching results are combined by weighting to obtain a comprehensive judgement, and different processing is carried out according to the size of the quantized value of this comprehensive judgement.
For the area-size attribute judgement, the area Si of each labelled connected region is obtained and the following rules are applied (a code sketch follows these rules):
if Si < threshold 1, the changed region is a noise point;
if Si > threshold 2, the changed region is a large-area change, which is first of all assumed to be caused by a change of illumination; however, since it cannot be excluded that a person is carrying some large article, the region-size influence factor Fs is set between 0.2 and 0.5;
if threshold 1 < Si < threshold 2, the changed region is suspected to contain a person, and the region-size influence factor Fs is set to 1.
The ranges of threshold 1 and threshold 2 are determined from the fact that, seen from above, the average cross-section of an adult is about 0.12 m²; the calibration result of the omnidirectional vision system then converts this into the corresponding pixel counts for threshold 1 and threshold 2.
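A minimal sketch of the area rule is given below; the concrete numeric value returned for the large-area case (here the midpoint of the 0.2-0.5 band) is an assumption.

```java
/** Sketch of the area-size attribute rule: thresholds are the pixel areas obtained
 *  from the 0.12 m^2 overhead human cross-section after calibration. */
public class AreaAttribute {
    public static double areaFactor(double areaPixels, double threshold1, double threshold2) {
        if (areaPixels < threshold1) return 0.0;   // noise point
        if (areaPixels > threshold2) return 0.35;  // large-area change: Fs in 0.2-0.5 per the text
        return 1.0;                                // plausibly human-sized region
    }
}
```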
For the shape-attribute judgement, the area Si of each labelled connected region is obtained and its shape features are matched geometrically against the reference image; the matching criterion is to maximise the similarity of the two images. To simplify the computation and improve real-time capability, the concrete approach of the present invention is to simplify the human model to a rectangle: first the mean width (horizontal extent) and the height hi (vertical extent) of each connected region are obtained, the mean width wi being the average of the widths measured at 4 equal divisions along the height hi; a rectangle of width wi and height hi is then constructed, and formula (30) computes the ratio of the area of the connected region to the area of this rectangle:
ε_area_i = Si / (wi · hi)    (30)
If the resulting ε_area_i lies between 0.5 and 0.9, the ratio of the width wi to the height hi of the rectangle is then computed with formula (31); if ε_area_i is less than 0.5 the connected region is excluded (not considered to contain a person):
ε_rate_i = wi / hi    (31)
The resulting ε_rate_i is evaluated according to spatial position: the monitored area is divided into several radial zones, each zone having its own criterion; for example, in the zone of radius 10 m to 12 m, if ε_rate_i lies between 0.15 and 0.4 the shape-attribute influence factor Fsh is set to 1.
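The following sketch combines formulas (30) and (31); treating values of ε_area above 0.9 as an exclusion, and passing the per-zone acceptance band as parameters, are assumptions for illustration.

```java
/** Sketch of the shape-attribute test of formulas (30)-(31): a connected region is
 *  compared with a rectangle of mean width w_i and height h_i. */
public class ShapeAttribute {
    public static double shapeFactor(double area, double meanWidth, double height,
                                     double rateLow, double rateHigh) {
        if (meanWidth <= 0 || height <= 0) return 0.0;
        double epsArea = area / (meanWidth * height);      // formula (30)
        if (epsArea < 0.5) return 0.0;                     // excluded: not person-like
        if (epsArea > 0.9) return 0.0;                     // outside the 0.5-0.9 band (assumption)
        double epsRate = meanWidth / height;               // formula (31)
        // rateLow/rateHigh depend on the radial zone, e.g. 0.15-0.4 for the 10-12 m zone
        return (epsRate >= rateLow && epsRate <= rateHigh) ? 1.0 : 0.0;
    }
}
```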
For the activity-characteristic judgement: since the invention uses an omnidirectional vision system, a person occupies only a small proportion of the whole scene, so the person's motion can be simplified to a simple rigid-motion model, and the person's overall speed or acceleration can be used as an important criterion.
By comparing the change of a target's centre of gravity between two adjacent frames, the direction of motion, the speed and the acceleration of the target can be obtained. If the centre of gravity of a target is at (xcg(t), ycg(t)) in frame t and at (xcg(t+1), ycg(t+1)) in frame t+1, the direction of motion is given by (dx = xcg(t+1) − xcg(t), dy = ycg(t+1) − ycg(t)) and the speed is computed by the following formula:
Vt = sqrt(dx² + dy²) / Δt    (32)
The acceleration is computed from the speed values obtained with formula (32):
at = (Vt − Vt−1) / Δt    (33)
where Δt is the time interval between the two frames, Vt is the speed between frame t and frame t+1, and at is the acceleration at frame t.
The centre of gravity of the target object is obtained from the connected-region area Si computed above and the accumulated pixel coordinates of the region in the X and Y directions; it is computed by formula (34):
Xcg(i) = ( Σ_{(x,y)∈Si} x ) / Si ;  Ycg(i) = ( Σ_{(x,y)∈Si} y ) / Si    (34)
A person's speed and acceleration each have a threshold range. If the value exceeds this range it is judged not to be a speed or acceleration produced by the human body itself, and the motion-characteristic influence factor Fmove is set to 0; if it is clearly below the range (for example below 1/10 of the threshold), Fmove is set between 0.2 and 0.5; otherwise Fmove is set to 1.
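A sketch of the centroid, speed, acceleration and motion-factor computations follows; the particular value returned for the "clearly below the range" case is an assumption within the stated 0.2-0.5 band.

```java
/** Sketch of the activity-characteristic test of formulas (32)-(34). */
public class ActivityAttribute {

    /** Formula (34): centroid of a connected region from its pixel coordinates
     *  (xs and ys are assumed to have the same length). */
    public static double[] centroid(int[] xs, int[] ys) {
        double sx = 0, sy = 0;
        for (int i = 0; i < xs.length; i++) { sx += xs[i]; sy += ys[i]; }
        return new double[] { sx / xs.length, sy / xs.length };
    }

    /** Formula (32): speed from the centroid displacement over the frame interval dt. */
    public static double speed(double[] cPrev, double[] cCur, double dt) {
        double dx = cCur[0] - cPrev[0], dy = cCur[1] - cPrev[1];
        return Math.sqrt(dx * dx + dy * dy) / dt;
    }

    /** Formula (33): acceleration from two successive speed values. */
    public static double acceleration(double vPrev, double vCur, double dt) {
        return (vCur - vPrev) / dt;
    }

    /** Motion influence factor Fmove following the rules in the text. */
    public static double motionFactor(double v, double vMax) {
        if (v > vMax)        return 0.0;   // faster than a human body can move unaided
        if (v < vMax / 10.0) return 0.35;  // clearly below the human range: 0.2-0.5
        return 1.0;
    }
}
```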
For the behaviour-type characteristic judgement: a thief stealing an engineering vehicle behaves differently from a passer-by, tending to stay near or in contact with the vehicle for a long time during the crime; a thief who wants to steal the whole vehicle will try to enter the cab, and a thief who wants to steal parts will work around those parts to remove them from the vehicle. Therefore, when a person is found to have been active within the monitoring range (near the cab or some part of the vehicle) for longer than a certain time interval, theft is suspected, and the longer the dwell time Tduring near the engineering vehicle, the greater the possibility of theft is considered to be. In the present invention the behaviour-type influence factor Fbehavior is defined to be positively correlated with Tduring, and its computing formula is given by formula (35).
The behaviour-type characteristic judgement is built upon the shape-attribute judgement and the area-size attribute judgement; in other words, it is carried out only when the shape-attribute and area-size judgements indicate that the object is a person.
For the multi-tracked-target attribute: if several tracked targets appear around the engineering vehicle at the same time at night, gang theft of the vehicle is more likely. Since an engineering vehicle is larger and heavier than other vehicles, the possibility of a gang stealing its parts is relatively high, so the present invention uses the multi-target influence factor as an important indicator for judging a theft event: if two or more moving targets are found that simultaneously satisfy the shape-attribute and area-size attribute judgements for a person, the multi-tracked-target influence factor Fgroup is set to 1.
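The sketch below covers the behaviour and multi-target factors. Formula (35) is not reproduced legibly in the text, so Fbehavior is modelled here as a clipped linear function of the dwell time Tduring; this is only an assumption that respects the stated positive correlation, and the tMin and tSaturate parameters are illustrative.

```java
/** Sketch of the behaviour-type and multi-tracked-target influence factors. */
public class BehaviourAndGroup {

    /** Fbehavior: assumed clipped-linear increase with dwell time Tduring (formula (35) analogue). */
    public static double behaviourFactor(double tDuring, double tMin, double tSaturate) {
        if (tDuring <= tMin) return 0.0;                   // shorter than a passer-by would stay
        if (tDuring >= tSaturate) return 1.0;
        return (tDuring - tMin) / (tSaturate - tMin);      // grows with the dwell time
    }

    /** Fgroup = 1 when two or more person-like targets are tracked simultaneously. */
    public static double groupFactor(int personLikeTargets) {
        return personLikeTargets >= 2 ? 1.0 : 0.0;
    }
}
```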
On the basis of the above five attribute or characteristic judgements, a comprehensive judgement is then made in the weighted comprehensive judgement and calculation module 32 in order to reduce the misjudgement rate; the comprehensive judgement uses a weighting scheme and is given by formula (36):
W_guard_alarm = Ks×Fs + Ksh×Fsh + Kmove×Fmove + (Kbehavior×Fbehavior + Kgroup×Fgroup)×Fs    (36)
In the formula:
Ks is the weighting coefficient of the target-object area attribute;
Ksh is the weighting coefficient of the target-object shape attribute;
Kmove is the weighting coefficient of the target-object motion attribute;
Kbehavior is the weighting coefficient of the target-object behaviour-type characteristic;
Kgroup is the weighting coefficient of the multi-target attribute.
According to the W_guard_alarm result calculated with formula (36), different outputs are produced depending on the size of the quantized value (a code sketch of this decision follows these rules):
If Kattention ≤ W_guard_alarm ≤ Kalarm1, the event is judged to be a suspicious intrusion and a prompt is given: the system automatically sends an SMS, voice call or e-mail through alarm module 33 so that the monitoring personnel can confirm the image over the network, and starts the image data file storage module 18 to record the live video; in this case the management personnel can choose over the network either to continue observing or to restart the calculation;
If Kalarm1 < W_guard_alarm ≤ Kalarm2, a theft early warning is issued: an SMS, voice call or e-mail is sent through alarm module 33 so that the monitoring personnel can confirm the image over the network, on-site confirmation is requested, and the image data file storage module 18 is started to record the live video;
If Kalarm2 < W_guard_alarm, then in addition to the above actions the device automatically notifies the public security organ (110); the notification packet contains the location of the alarm, the licence plate number and the owner, this information being stored in the user data information store 36; if the public security organ has a geographic information system with positioning and tracking functions, not only is rescue timely, but the vehicle can also be intercepted accurately.
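A compact sketch of the weighted score and the three-level decision is given below; the weight values and the threshold constants passed in are tuning assumptions, not values fixed by the patent.

```java
/** Sketch of the weighted comprehensive judgement of formula (36) and the alarm tiers. */
public class ComprehensiveJudgement {

    /** Formula (36): W = Ks*Fs + Ksh*Fsh + Kmove*Fmove + (Kbehavior*Fbehavior + Kgroup*Fgroup)*Fs */
    public static double guardAlarmScore(double ks, double ksh, double kmove,
                                         double kbehavior, double kgroup,
                                         double fs, double fsh, double fmove,
                                         double fbehavior, double fgroup) {
        return ks * fs + ksh * fsh + kmove * fmove
             + (kbehavior * fbehavior + kgroup * fgroup) * fs;
    }

    /** Three-level decision with Kattention < Kalarm1 < Kalarm2. */
    public static String decide(double w, double kAttention, double kAlarm1, double kAlarm2) {
        if (w > kAlarm2)     return "THEFT_ALARM";    // also notify the public security organ (110)
        if (w > kAlarm1)     return "THEFT_WARNING";  // SMS/voice/e-mail plus on-site confirmation
        if (w >= kAttention) return "SUSPICIOUS";     // prompt the monitor and start recording
        return "NORMAL";
    }
}
```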
The microprocessor 15 is an embedded system, and the algorithms in this embodiment are implemented in the Java language.
Embodiment 2
In addition to the effects produced by Embodiment 1, the vision-based detection of the surroundings of the engineering vehicle can at the same time detect whether a suspended swinging object mounted on the vehicle is swinging. The suspended swinging object is placed near the omnibearing vision sensor; when a thief steals parts outside the vehicle body (such as tyres) or moves the engineering vehicle in some other way, the suspended object will swing or vibrate up and down, so detecting such swinging or vibration through the omnibearing vision sensor likewise provides an anti-theft effect.
Embodiment 3
The effects produced by Embodiment 1 can also serve as an aid that widens the field of vision of the engineering-vehicle operator. Because of the layout of present engineering vehicles, the blind areas in the operator's forward or rear field of view are excessive, and casualties often occur as a result. It is therefore necessary to expand the field of view of the engineering vehicle and to further improve its visibility, so as to increase the safety and the working efficiency of travel and operation. This applies especially to remotely controlled engineering vehicles: the operator drives the vehicle at the scene end by watching the picture on the monitor of the video surveillance system, and if the displayed picture is only a "window" onto the scene, the operator has to adjust the orientation of the camera lens constantly in order to drive the vehicle remotely. If the omnibearing vision sensor is placed on top of the engineering vehicle, the remotely controlled vehicle obtains a better panoramic view and its field-of-view performance is clearly improved.
Embodiment 4
In Embodiment 1 the monitoring omnidirectional image is obtained in the visible-light wavelength range by means of the lighting unit on the engineering vehicle; an infrared emitting unit can equally be used to obtain an infrared monitoring omnidirectional image in the infrared wavelength range. The emission range of the infrared unit must cover the whole monitoring range of the engineering vehicle, and the image processing of the infrared omnidirectional image can be handled as monochromatic.
The effect produced by Embodiments 1 and 2 is that the omnidirectional computer vision sensor, network communication technology, image processing technology and the detection of the suspended swinging object on the vehicle together provide a fast, accurate, reliable and economical engineering-vehicle anti-theft alarm system in which technical prevention and human supervision are closely combined.
The effect produced by Embodiment 3 is that the omnidirectional computer vision sensor, network communication technology and image processing technology clearly improve the field-of-view performance of a remotely controlled engineering vehicle while it is working.

Claims (10)

1. An engineering-vehicle anti-theft alarm system based on omnidirectional computer vision, characterised in that: the anti-theft alarm system comprises a microprocessor, an omnibearing vision sensor for monitoring the anti-theft situation of the engineering vehicle, and a communication module for communicating with the outside; the output of the omnibearing vision sensor is connected with the microprocessor; the omnibearing vision sensor comprises an outwardly convex catadioptric mirror for reflecting objects in the monitored field, a dark cone for preventing light refraction and light saturation, a transparent cylinder and a camera; the catadioptric mirror is located at the top of the transparent cylinder with its convex surface facing downwards, the dark cone is fixed at the centre of the convex part of the catadioptric mirror, the camera faces upwards toward the convex mirror surface, and the camera is located at the virtual focus position of the convex mirror surface;
Described microprocessor comprises:
an image data reading module, used to read the video image information transmitted from the omnibearing vision sensor;
an image data file storage module, used to save the read video image information in a storage unit as files;
a sensor calibration module, used to calibrate the parameters of the omnibearing vision sensor and to establish the correspondence between real objects in space and the video image obtained;
an image stretching processing module, used to expand the circular video image that is read into a cylindrical panoramic image;
a moving object detection module, used to perform a difference operation between the live video image of the current frame and a relatively stable reference image, the image subtraction being computed by formula (28):
f_d(X, t0, ti) = f(X, ti) − f(X, t0)    (28)
in the formula, f_d(X, t0, ti) is the result of the subtraction between the live image and the reference image, f(X, ti) is the live image, and f(X, t0) is the reference image;
and to compute the subtraction between the current image and the image of the adjacent k-th previous frame by formula (29):
f_d(X, ti−k, ti) = f(X, ti) − f(X, ti−k)    (29)
in the formula, f_d(X, ti−k, ti) is the result of the subtraction between the live image and the adjacent k-th previous frame, and f(X, ti−k) is the image of the adjacent k-th previous frame;
when f_d(X, t0, ti) ≥ threshold and f_d(X, ti−k, ti) ≥ threshold both hold, a moving object is judged to be present;
when f_d(X, t0, ti) ≥ threshold and f_d(X, ti−k, ti) < threshold, a stationary object is judged to be present, and the reference image is updated and replaced with formula (30):
f(X, t0) ⇐ f(X, ti−k)    (30)
when f_d(X, t0, ti) < threshold, the object is judged to be stationary;
a connected region computing module, used to label the current image, a cell whose pixel grey value is 0 indicating that there is no human activity in the cell and a value of 1 indicating that there is human activity; it determines whether a pixel in the current image is equal to the adjacent pixels around it, judges the grey levels to be connected if they are equal, and treats all mutually connected pixels as one connected region; the area and centre of gravity of each connected region so obtained are then computed; the centre of gravity of a person is obtained from the computed connected-region area Si and the accumulated pixel coordinates of the region in the X and Y directions, computed by formula (34):
Xcg(i) = ( Σ_{(x,y)∈Si} x ) / Si ;  Ycg(i) = ( Σ_{(x,y)∈Si} y ) / Si    (34) ;
a human model building module, used to build a human body model from the vertices of the rectangle bounding the connected region and the centre of gravity of the target; the module automatically assigns an identification (ID) number to each newly detected target object; as the viewing angle of the omnibearing vision sensor changes, the size and shape of the moving human body change correspondingly and the human body model is revised dynamically;
an area-size attribute judging module, used to obtain the area Si of each connected region, with the following judgment rules:
if Si < threshold 1, the changed region is a noise point;
if Si > threshold 2, the changed region is a large-area change, which is first of all assumed to be caused by a change of illumination; however, since it cannot be excluded that a person is carrying some large article, the region-size influence factor Fs is set between 0.2 and 0.5;
if threshold 1 < Si < threshold 2, the changed region is suspected to contain a person, and the region-size influence factor Fs is set to 1;
the ranges of threshold 1 and threshold 2 are determined from the fact that, seen from above, the average cross-section of an adult is about 0.12 m²; the calibration result of the omnidirectional vision system then converts this into the corresponding pixel counts for threshold 1 and threshold 2;
a shape attribute judging module, used to obtain the area Si and the shape features of each connected region and compare them with the human body model: first the mean width and the height of each connected region are obtained, the mean width wi being the average of the widths measured at 4 equal divisions along the height hi; a rectangle of width wi and height hi is constructed, and formula (30) then computes the ratio of the area of the connected region to the area of this rectangle:
ε_area_i = Si / (wi · hi)    (30)
if the resulting ε_area_i lies between 0.5 and 0.9, the ratio of the width wi to the height hi of the rectangle is then computed with formula (31); if ε_area_i is less than 0.5 the connected region is excluded:
ε_rate_i = wi / hi    (31)
the resulting ε_rate_i is evaluated according to spatial position: the monitored area is divided into several radial zones, each zone having its own criterion; for example, in the zone of radius 10 m to 12 m, if ε_rate_i lies between 0.15 and 0.4 the shape-attribute influence factor Fsh is set to 1;
an activity characteristic judging module, used to obtain the direction of motion, the speed and the acceleration of a target by comparing the change of its centre of gravity between two adjacent frames: if the centre of gravity of a target is at (xcg(t), ycg(t)) in frame t and at (xcg(t+1), ycg(t+1)) in frame t+1, the direction of motion is given by (dx = xcg(t+1) − xcg(t), dy = ycg(t+1) − ycg(t)) and the speed is computed by the following formula:
Vt = sqrt(dx² + dy²) / Δt    (32)
the acceleration is computed from the speed values obtained with formula (32):
at = (Vt − Vt−1) / Δt    (33)
where Δt is the time interval between the two frames, Vt is the speed between frame t and frame t+1, and at is the acceleration at frame t;
if the person's speed or acceleration exceeds the threshold range, it is judged not to be produced by the human body itself and the motion-characteristic influence factor Fmove is set to 0; if it is clearly below the threshold range, Fmove is set between 0.2 and 0.5; otherwise Fmove is set to 1;
a behaviour-type characteristic judging module, used to record that a person has been active within the monitoring range for longer than a certain time interval, in which case theft is suspected; the longer the dwell time Tduring near the engineering vehicle, the greater the possibility of theft is considered to be; the behaviour-type influence factor Fbehavior is defined to be positively correlated with Tduring and is computed by formula (35):
[formula (35): Fbehavior given as an increasing function of Tduring; the original formula image is not legibly reproduced here]
a multi-tracked-target attribute judging module, used to judge whether there are two or more moving targets that simultaneously satisfy the shape-attribute judgement and the area-size attribute judgement for a person, in which case the multi-tracked-target influence factor Fgroup is set to 1;
a comprehensive judgement processing module, used to make a comprehensive judgement on the basis of the above five judgements in order to reduce the misjudgement rate; the comprehensive judgement uses a weighting scheme and is given by formula (36):
W_guard_alarm = Ks×Fs + Ksh×Fsh + Kmove×Fmove + (Kbehavior×Fbehavior + Kgroup×Fgroup)×Fs    (36)
In the formula:
Ks is the weighting coefficient of the target-object area attribute;
Ksh is the weighting coefficient of the target-object shape attribute;
Kmove is the weighting coefficient of the target-object motion attribute;
Kbehavior is the weighting coefficient of the target-object behaviour-type characteristic;
Kgroup is the weighting coefficient of the multi-target attribute;
an abnormality alarm module, used to send alarm information through the communication module when the W_guard_alarm result calculated with formula (36) is greater than a preset threshold value.
2. The engineering-vehicle anti-theft alarm system based on omnidirectional computer vision as claimed in claim 1, characterised in that the preset threshold values comprise Kattention, Kalarm1 and Kalarm2, with Kattention < Kalarm1 < Kalarm2;
if Kattention ≤ W_guard_alarm ≤ Kalarm1, a suspicious intrusion is judged, a prompt is given, and the image data file storage module is started to record the live video;
if Kalarm1 < W_guard_alarm ≤ Kalarm2, a theft early warning is judged; an SMS, voice call or e-mail is sent through the communication module so that the monitoring personnel can confirm the image over the network, on-site confirmation is requested, and the image data file storage module is started to record the live video;
if Kalarm2 < W_guard_alarm, a theft alarm is judged; the public security organ (110) is automatically notified, and the notification packet contains the alarm location, the licence plate number and the owner information.
3. The engineering-vehicle anti-theft alarm system based on omnidirectional computer vision as claimed in claim 1 or 2, characterised in that: in order to be free of distortion in the horizontal monitoring direction, the catadioptric mirror surface is designed by the following method:
for the image to be undistorted in the horizontal direction, the horizontal coordinate of a scene point and the coordinate of the corresponding image point are required to be linearly related:
d(ρ) = αρ    (1)
in formula (1), ρ is the distance from the centre point of the mirror profile and α is the magnification of the imaging system; let γ be the angle between the normal of the mirror at point M and the Z axis, Φ the angle between the incident ray and the Z axis, and θ the angle between the reflected ray and the Z axis; then
tgΦ = ( d(x) − x ) / ( z(x) − h )    (2)
tgγ = dz(x)/dx    (3)
tg(2γ) = 2·(dz(x)/dx) / ( 1 − (dz(x)/dx)² )    (4)
tgθ = ρ/f = x/z(x)    (5)
By the law of reflection:
2γ = Φ − θ    (6)
tg(2γ) = tg(Φ − θ) = ( tgΦ − tgθ ) / ( 1 + tgΦ·tgθ )
From formulas (2), (4), (5) and (6) the differential equation (7) is obtained:
( dz(x)/dx )² + 2k·( dz(x)/dx ) − 1 = 0    (7)
where k = { z(x)·[z(x) − h] + x·[d(x) − x] } / { z(x)·[d(x) − x] + x·[z(x) − h] }    (8)
From formula (7) the differential equation (9) is obtained:
dz(x)/dx + k − sqrt(k² + 1) = 0    (9)
From formulas (1) and (5), formula (10) is obtained:
d(x) = αf·x / z(x)    (10)
From formulas (8), (9), (10) and the initial conditions, the differential equation can be solved numerically to obtain the mirror surface shape; when designing the catadioptric panoramic system a suitable camera is selected according to the application requirements, Rmin and the focal length f of the lens are calibrated, the distance Ho of the mirror from the camera is determined, and the mirror aperture Do is calculated from formula (1); determination of the system parameter:
the system parameter αf is determined according to the field of view required in the height direction; formula (11) is obtained from formulas (1), (2) and (5), with the simplification z(x) ≈ z0, the main consideration being that the height variation of the mirror surface is small compared with the positions of the mirror and the camera:
tgΦ = ( αf − z0 )·ρ / ( f·(z0 − h) )    (11)
taking the largest circle on the image plane centred at the image centre, ρ = Rmin, so that ωmax = Rmin/f, and the corresponding field-of-view angle is Φmax; formula (12) is then obtained:
αf = ( z0 − h )·tgΦmax / ωmax + z0    (12)
4. The engineering-vehicle anti-theft alarm system based on omnidirectional computer vision as claimed in claim 3, characterised in that the microprocessor further comprises a background maintenance module, and the background maintenance module comprises:
a background luminance computing unit, used to compute the average background brightness Yb with formula (25):
Yb = [ Σ_{x=0}^{W−1} Σ_{y=0}^{H−1} Yn(x, y)·(1 − Mn(x, y)) ] / [ Σ_{x=0}^{W−1} Σ_{y=0}^{H−1} (1 − Mn(x, y)) ]    (25)
in formula (25), Yn(x, y) is the brightness of each pixel of the current frame and Mn(x, y) is the mask table of the current frame; the mask table records, in an array M of the same size as the video frame, whether each pixel has undergone a motion change, see formula (27):
Mn(x, y) = 1 if pixel (x, y) is detected as changed by motion, and Mn(x, y) = 0 otherwise    (27)
Yb0 is the background luminance of the frame just before a moving object is judged to exist, and Yb1 is the background luminance of the first frame in which the moving object is judged to exist; the change of the mean luminance of the two frames is:
ΔY=Yb1-Yb0 (26)
if ΔY is greater than the upper limit, a light-on event is considered to have occurred; if ΔY is less than the lower limit, a light-off event is considered to have occurred; if ΔY lies between the lower limit and the upper limit, the light is considered to have changed naturally;
a background adaptive unit, used to perform adaptive learning according to formula (22) when the light changes naturally:
X_mix,bn+1(i) = (1 − λ)·X_mix,bn(i) + λ·X_mix,cn(i)    (22)
in the formula, X_mix,cn(i) is the RGB vector of the current frame, X_mix,bn(i) is the current background RGB vector, X_mix,bn+1(i) is the predicted background RGB vector for the next frame, and λ is the background update speed; λ = 0 uses a fixed background (the initial background); λ = 1 uses the current frame as the background; for 0 < λ < 1 the background is a mixture of the previous background and the current frame;
when the light change is caused by switching a lamp, the background pixels are reset according to the current frame, see formula (23):
X_mix,bn+1(i) = X_mix,cn(i)    (23).
5. The engineering-vehicle anti-theft alarm system based on omnidirectional computer vision as claimed in claim 4, characterised in that the microprocessor further comprises:
a noise rejection module, used to replace each pixel value by the average of all the values in its local neighbourhood, as shown in formula (32):
h[i, j] = (1/M)·Σ f[k, l]    (32)
in formula (32), M is the number of pixels in the neighbourhood.
6. The engineering-vehicle anti-theft alarm system based on omnidirectional computer vision as claimed in claim 5, characterised in that: the image stretching processing module establishes, according to the correspondence between a point (x*, y*) on the circular omnidirectional image and a point (x**, y**) on the rectangular cylindrical panorama, a mapping matrix between (x*, y*) and (x**, y**), as shown in formula (21):
P**(x**, y**) ← M × P*(x*, y*)    (21)
in the formula, M is the mapping matrix, P*(x*, y*) is the pixel matrix of the circular omnidirectional image, and P**(x**, y**) is the pixel matrix of the rectangular cylindrical panorama.
7. The engineering-vehicle anti-theft alarm system based on omnidirectional computer vision as claimed in claim 1 or 2, characterised in that the microprocessor further comprises:
a network transmission module, used to broadcast the live video image obtained over the network as a video stream, so that the user can grasp the situation on site in real time through various networks;
a real-time playback module, used to play the live video image obtained on a display device.
8. The engineering-vehicle anti-theft alarm system based on omnidirectional computer vision as claimed in claim 7, characterised in that: a suspended swinging object used to detect whether swinging occurs is installed on the engineering vehicle, the suspended swinging object being placed near the omnibearing vision sensor; the microprocessor further comprises a vibration detection module, used to judge that a theft may be occurring when the suspended swinging object is made to swing or vibrate up and down by the actions of a vehicle thief.
9. The vehicle anti-theft alarm system based on omnidirectional computer vision as claimed in claim 7, characterised in that: the engineering vehicle is provided with a lighting unit for obtaining the monitoring omnidirectional image in the visible-light wavelength range, the lighting unit being an ordinary light source or an infrared light source.
10. The vehicle anti-theft alarm system based on omnidirectional computer vision as claimed in claim 7, characterised in that: the omnibearing vision sensor is installed at the centre of the top of the engineering vehicle.
CN200610051683A 2006-05-26 2006-05-26 Engineering car anti-theft alarm system based on omnibearing computer vision Expired - Fee Related CN1858551B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200610051683A CN1858551B (en) 2006-05-26 2006-05-26 Engineering car anti-theft alarm system based on omnibearing computer vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200610051683A CN1858551B (en) 2006-05-26 2006-05-26 Engineering car anti-theft alarm system based on omnibearing computer vision

Publications (2)

Publication Number Publication Date
CN1858551A true CN1858551A (en) 2006-11-08
CN1858551B CN1858551B (en) 2010-05-12

Family

ID=37297477

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200610051683A Expired - Fee Related CN1858551B (en) 2006-05-26 2006-05-26 Engineering car anti-theft alarm system based on omnibearing computer vision

Country Status (1)

Country Link
CN (1) CN1858551B (en)

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101123722B (en) * 2007-09-25 2010-12-01 北京智安邦科技有限公司 Panorama video intelligent monitoring method and system
CN101571982B (en) * 2009-05-11 2011-04-13 宁波海视智能系统有限公司 Method for judging stolen articles in video monitoring range
CN101296302B (en) * 2007-04-26 2011-11-09 佳能株式会社 Information processing apparatus and method
CN101809629B (en) * 2007-09-28 2012-11-21 富士通天株式会社 Drive recorder and setting method for the same
CN104539890A (en) * 2014-12-18 2015-04-22 苏州阔地网络科技有限公司 Target tracking method and system
CN104670162A (en) * 2013-11-29 2015-06-03 现代摩比斯株式会社 Method and system for vehicle intrusion detection
CN104794084A (en) * 2014-01-21 2015-07-22 罗伯特·博世有限公司 Method for the efficient transmission of data
CN105141927A (en) * 2015-09-22 2015-12-09 浙江吉利汽车研究院有限公司 Vehicle-mounted image monitoring device
CN105245822A (en) * 2014-06-24 2016-01-13 陈凯柏 Lighting device controlled by image recognition system
CN106991414A (en) * 2017-05-17 2017-07-28 司法部司法鉴定科学技术研究所 A kind of method that state of motion of vehicle is obtained based on video image
CN108108132A (en) * 2016-10-17 2018-06-01 泉州泉港璟冠信息科技有限公司 A kind of vehicle
CN108275114A (en) * 2018-02-27 2018-07-13 苏州清研微视电子科技有限公司 A kind of Security for fuel tank monitoring system
CN109204114A (en) * 2017-06-29 2019-01-15 长城汽车股份有限公司 The projective techniques and device of vehicle greeting lamp
CN109849788A (en) * 2018-12-29 2019-06-07 北京七鑫易维信息技术有限公司 Information providing method, apparatus and system
CN110191322A (en) * 2019-06-05 2019-08-30 重庆两江新区管理委员会 A kind of video monitoring method and system of shared early warning
CN110493572A (en) * 2019-08-21 2019-11-22 深圳市合隆智慧城市服务有限公司 Smart city monitoring system based on image recognition
CN112702569A (en) * 2020-12-17 2021-04-23 贵州创想宏宇科技有限公司 Intelligent monitoring system
TWI760580B (en) * 2018-11-20 2022-04-11 遠創智慧股份有限公司 License plate image obtaining method and system using the same
CN114399537A (en) * 2022-03-23 2022-04-26 东莞先知大数据有限公司 Vehicle tracking method and system for target personnel
CN116708724A (en) * 2023-08-07 2023-09-05 江苏省电子信息产品质量监督检验研究院(江苏省信息安全测评中心) Sample monitoring method and system based on machine vision
CN116872885A (en) * 2023-09-07 2023-10-13 江西科技学院 Intelligent automobile anti-theft method, system and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1435337A (en) * 2002-12-27 2003-08-13 陈涛 Antitheft and antirob networking alarm method and system for motor vehicle
CN2643415Y (en) * 2003-07-09 2004-09-22 江国栋 Video moving target inbreak alarmer
CN1655198A (en) * 2005-01-20 2005-08-17 唐汇淑 Fire proof burglary protection alarm system with image storage and transmission function

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101296302B (en) * 2007-04-26 2011-11-09 佳能株式会社 Information processing apparatus and method
CN101123722B (en) * 2007-09-25 2010-12-01 北京智安邦科技有限公司 Panorama video intelligent monitoring method and system
CN101809629B (en) * 2007-09-28 2012-11-21 富士通天株式会社 Drive recorder and setting method for the same
CN101571982B (en) * 2009-05-11 2011-04-13 宁波海视智能系统有限公司 Method for judging stolen articles in video monitoring range
CN104670162A (en) * 2013-11-29 2015-06-03 现代摩比斯株式会社 Method and system for vehicle intrusion detection
CN104794084A (en) * 2014-01-21 2015-07-22 罗伯特·博世有限公司 Method for the efficient transmission of data
CN104794084B (en) * 2014-01-21 2019-04-02 罗伯特·博世有限公司 Method for efficiently transmitting data
CN105245822A (en) * 2014-06-24 2016-01-13 陈凯柏 Lighting device controlled by image recognition system
CN104539890A (en) * 2014-12-18 2015-04-22 苏州阔地网络科技有限公司 Target tracking method and system
CN105141927A (en) * 2015-09-22 2015-12-09 浙江吉利汽车研究院有限公司 Vehicle-mounted image monitoring device
CN108108132A (en) * 2016-10-17 2018-06-01 泉州泉港璟冠信息科技有限公司 A kind of vehicle
CN108108132B (en) * 2016-10-17 2020-07-14 陈爱娟 Vehicle with a steering wheel
CN106991414A (en) * 2017-05-17 2017-07-28 司法部司法鉴定科学技术研究所 A kind of method that state of motion of vehicle is obtained based on video image
CN109204114A (en) * 2017-06-29 2019-01-15 长城汽车股份有限公司 The projective techniques and device of vehicle greeting lamp
CN108275114A (en) * 2018-02-27 2018-07-13 苏州清研微视电子科技有限公司 A kind of Security for fuel tank monitoring system
CN108275114B (en) * 2018-02-27 2020-06-23 苏州清研微视电子科技有限公司 Oil tank anti-theft monitoring system
TWI760580B (en) * 2018-11-20 2022-04-11 遠創智慧股份有限公司 License plate image obtaining method and system using the same
CN109849788A (en) * 2018-12-29 2019-06-07 北京七鑫易维信息技术有限公司 Information providing method, apparatus and system
CN110191322A (en) * 2019-06-05 2019-08-30 重庆两江新区管理委员会 A kind of video monitoring method and system of shared early warning
CN110191322B (en) * 2019-06-05 2021-06-22 重庆两江新区管理委员会 Video monitoring method for sharing early warning
CN110493572A (en) * 2019-08-21 2019-11-22 深圳市合隆智慧城市服务有限公司 Smart city monitoring system based on image recognition
CN112702569A (en) * 2020-12-17 2021-04-23 贵州创想宏宇科技有限公司 Intelligent monitoring system
CN112702569B (en) * 2020-12-17 2022-07-22 广东城市保安服务有限公司 Intelligent monitoring system
CN114399537A (en) * 2022-03-23 2022-04-26 东莞先知大数据有限公司 Vehicle tracking method and system for target personnel
CN114399537B (en) * 2022-03-23 2022-07-01 东莞先知大数据有限公司 Vehicle tracking method and system for target personnel
CN116708724A (en) * 2023-08-07 2023-09-05 江苏省电子信息产品质量监督检验研究院(江苏省信息安全测评中心) Sample monitoring method and system based on machine vision
CN116708724B (en) * 2023-08-07 2023-10-20 江苏省电子信息产品质量监督检验研究院(江苏省信息安全测评中心) Sample monitoring method and system based on machine vision
CN116872885A (en) * 2023-09-07 2023-10-13 江西科技学院 Intelligent automobile anti-theft method, system and storage medium
CN116872885B (en) * 2023-09-07 2023-12-01 江西科技学院 Intelligent automobile anti-theft method, system and storage medium

Also Published As

Publication number Publication date
CN1858551B (en) 2010-05-12

Similar Documents

Publication Publication Date Title
CN1858551A (en) Engineering car anti-theft alarm system based on omnibearing computer vision
CN1812569A (en) Intelligent safety protector based on omnibearing vision sensor
CN1912950A (en) Device for monitoring vehicle breaking regulation based on all-position visual sensor
CN1306452C (en) Monitor, monitoring method and programm
CN1852428A (en) Intelligent tunnel safety monitoring apparatus based on omnibearing computer vision
CN1607452A (en) Camera unit and apparatus for monitoring vehicle periphery
CN1306243C (en) Method of measuring object and system for measuring object
CN1943824A (en) An automatic fire fighting unit based on omnibearing visual sensor
CN1804927A (en) Omnibearing visual sensor based road monitoring apparatus
CN101059909A (en) All-round computer vision-based electronic parking guidance system
CN101051223A (en) Air conditioner energy saving controller based on omnibearing computer vision
CN2622672Y (en) 3-D monitor
CN1812570A (en) Vehicle antitheft device based on omnibearing computer vision
CN101032405A (en) Safe driving auxiliary device based on omnidirectional computer vision
CN1212724C (en) Image synthesizing device and method
CN1344470A (en) Image processing device and monitoring system
CN1547726A (en) Method for monitoring a moving object and system regarding same
CN1851338A (en) Central air conditioner energy-saving control device based on omnibearing computer vision
CN100538757C (en) Fire-disaster monitoring device based on omnibearing vision sensor
CN1782668A (en) Method and device for preventing collison by video obstacle sensing
CN1637578A (en) Camera unit and apparatus for monitoring vehicle periphery
CN1607818A (en) Image processing device, operation supporting device, and operation supporting system
CN1317124A (en) Visual device
CN1931697A (en) Intelligent dispatcher for group controlled lifts based on image recognizing technology
CN1787605A (en) Control system, apparatus compatible with the system, and remote controller

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20100512

Termination date: 20130526