CN101552910B - Remnant detection device based on comprehensive computer vision - Google Patents

Info

Publication number
CN101552910B
Authority
CN
China
Prior art keywords
article
over
carrier
scene
formula
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2009100970717A
Other languages
Chinese (zh)
Other versions
CN101552910A (en)
Inventor
汤一平 (Tang Yiping)
陈耀宇 (Chen Yaoyu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN2009100970717A priority Critical patent/CN101552910B/en
Publication of CN101552910A publication Critical patent/CN101552910A/en
Application granted granted Critical
Publication of CN101552910B publication Critical patent/CN101552910B/en
Abstract

An abandoned-object detection device based on omnidirectional computer vision includes a vision sensor for monitoring the security situation over a large panoramic area and a microprocessor for detecting abandoned objects from the sensor's signals. The vision sensor comprises an omnidirectional vision sensor (ODVS) and a high-speed pan-tilt snapshot camera. The microprocessor includes an ODVS image acquisition module; an in-scene object detection module for extracting moving objects, temporarily stationary abandoned objects, and static scene objects from the monitored scene; and an abandonment-event detection module for tracking the state of every abandonment event in the scene, comprising an event-ID auto-generation unit, a carrier snapshot unit, a carrier tracking unit, a spatial-position judgement unit, a unit that computes how long the item has been stationary, and an abandonment-state judgement unit. The invention offers omnidirectional vision, good reliability, and excellent real-time performance.

Description

Abandoned-object detection device based on omnidirectional computer vision
Technical field
The present invention relates to applications of optical technology and computer vision to intelligent security, and in particular to an abandoned-object detection device for large-scale areas.
Background technology
When a person or vehicle lingers near a restricted area, it may mean a suspect is waiting for an opportunity to plant a suspicious article, or is probing the effectiveness of the video surveillance system currently in use. Under a severe anti-terrorism situation, detecting suspicious abandoned objects that affect public safety in places such as airport and railway waiting halls, station platforms, gymnasiums, and exhibition centers has become mandatory.
Chinese invention patent publication CN101231696 discloses an abandoned-object detection method and system. The method comprises: detecting a foreground image that differs from the background, determining the foreground pixels, and characterizing them; timing how long each foreground pixel keeps its characterized state and extracting the pixels whose timer reaches a preset value; detecting the connected region formed by the extracted pixels and defining it as the target to be analyzed; and evaluating the target's motion characteristics to confirm whether it is an abandoned object. The patent also discloses a corresponding system comprising an object detection device, a timing unit that times pixels and extracts those whose characterized state has lasted beyond the preset value, a connected-region detection device that determines the target to be analyzed, and a target analysis device that decides whether the target is an abandoned object.
The above method and system have several main problems: 1) because the visual range of a single camera is limited, detecting abandoned objects over a large scene requires multiple cameras; 2) obtaining accurate foreground pixels in a complex scene is not easy and is highly susceptible to environmental interference; 3) timing foreground pixels in every frame consumes considerable computational resources, hurting detection speed and precision; 4) ignoring the appearance of moving and temporarily stationary objects in the video, and the temporal order of events, leads to a high false-detection rate; 5) determining who left the object is all but impossible.
Summary of the invention
To overcome the limited visual range, poor reliability, and poor real-time performance of existing abandoned-object detection systems, the invention provides an abandoned-object detection device based on omnidirectional computer vision with an omnidirectional field of view, good reliability, and real-time operation.
The technical solution adopted for the present invention to solve the technical problems is:
An abandoned-object detection device based on omnidirectional computer vision comprises a vision sensor for monitoring the security situation over a large panoramic area and a microprocessor, connected to the vision sensor, that detects abandoned objects from its signals. The vision sensor consists of an omnidirectional vision sensor (ODVS) and a high-speed snapshot camera. The ODVS comprises a support, a transparent housing, a primary catadioptric mirror, a secondary catadioptric mirror, a camera lens, and a wide-angle lens. The transparent housing, the primary catadioptric mirror, and the camera lens are mounted on the support; the primary catadioptric mirror sits on top of the transparent housing, which has an opening at its bottom center; the camera lens sits above the primary catadioptric mirror; the secondary catadioptric mirror is installed in the central opening, with a small hole in its middle into which the wide-angle lens is fitted beneath the camera lens; the central axes of the primary catadioptric mirror, the secondary catadioptric mirror, and the wide-angle lens lie on the same line. Let f1 be the focal length of the camera lens, f2 the focal length of the wide-angle lens, S1 the distance from the camera lens to its focus, S2 the distance from the camera lens to the first image point, S3 the distance from the wide-angle lens to the first image point, and S4 the distance from the wide-angle lens to the real object point. The thin-lens imaging formula then gives the following relations:
1/f1 = 1/S1 + 1/S2    (10)
1/f2 = 1/S3 + 1/S4    (11)
d = S2 + S3    (12)
Taking the distance d between the camera lens and the wide-angle lens as a constraint, the focal length f2 of the wide-angle lens is designed to satisfy formula (12). Treating the camera lens and the wide-angle lens as one compound lens, its focal length f is given by:

1/f = (f1 + f2 − d) / (f1 · f2)    (13)

Further, with D the diameter of the compound lens, its magnification is:

n = D / f    (14)

and the compound lens must satisfy:

n = D / f = 2 · θ1max    (15)
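As a numeric sanity check of formulas (13)–(15), a short sketch; all numeric values are illustrative assumptions, not taken from the patent:

```python
def compound_focal_length(f1, f2, d):
    """Effective focal length of two thin lenses separated by d, per formula (13)."""
    return (f1 * f2) / (f1 + f2 - d)

def magnification(D, f):
    """Magnification n = D / f, per formula (14)."""
    return D / f

# Illustrative values (assumed): a 12 mm camera lens and a 2 mm wide-angle
# lens separated by 10 mm, with an 8 mm compound-lens diameter.
f = compound_focal_length(12.0, 2.0, 10.0)   # 24 / 4 = 6.0 mm
n = magnification(8.0, f)
theta1_max = n / 2.0                         # formula (15): n = 2 * theta1_max
print(round(f, 3), round(n, 3), round(theta1_max, 3))
```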
In formula (15), θ1max is the maximum angle between the secondary reflected ray V3 and the catadioptric main axis Z. The microprocessor comprises:
An ODVS image acquisition module, used to obtain the panoramic video image of the monitored scene, comprising a system initialization unit and an image acquisition unit;
An in-scene object detection module, used to extract moving objects, temporarily stationary abandoned objects, and static scene objects from the monitored scene, comprising:
A long-period background modeling unit, which models the background pixels with a Gaussian mixture model using a smaller update rate;
A short-period background modeling unit, which models the background pixels with a Gaussian mixture model using a larger update rate;
A shadow suppression unit, used to distinguish moving objects from moving shadows with a detection method based on the HSV color space, in which the H component represents hue, S saturation, and V brightness; whether a pixel belongs to a shadow is judged by decision formula (20), reconstructed here from the parameter description:

SP(x, y) = 1  if  α ≤ I_V(x, y) / B_V(x, y) ≤ β  and  (I_S(x, y) − B_S(x, y)) ≤ τS  and  |I_H(x, y) − B_H(x, y)| ≤ τH;  otherwise 0    (20)

where I_V(x, y), I_S(x, y), I_H(x, y) and B_V(x, y), B_S(x, y), B_H(x, y) denote, respectively, the V, S, H components of the new input pixel value I(x, y) and of the background pixel value at (x, y). The parameters satisfy 0 < α < β < 1; α accounts for the strength of the shadow (the stronger the shadow cast on the background, the smaller α should be taken), and β enhances the robustness of the system. Parameter τS is less than zero, while τH is chosen mainly by tuning to the scene;
An object discrimination unit, used to extract moving objects, temporarily stationary abandoned objects, and static scene objects from the monitored scene: the intermediate results B_L and B_S produced by the long-period and short-period background modeling units are each differenced against the current frame, yielding the long-period foreground F_L and the short-period foreground F_S; from the relation between the F_L and F_S values, each pixel of the current frame falls into one of four types, with the determination shown in Table 1:
        F_L(x,y)   F_S(x,y)   Pixel type
Type 1     1          1       Moving object
Type 2     1          0       Temporarily stationary abandoned object
Type 3     0          1       Random noise / background change
Type 4     0          0       Static scene object

Table 1
According to Table 1, Type 1 pixels belong to moving objects, Type 2 to temporarily stationary abandoned objects, and Type 4 to static scene objects;
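The decision logic of Table 1 is small enough to state directly; a minimal sketch (function name assumed):

```python
def classify_pixel(f_long, f_short):
    """Map the long/short-period foreground flags F_L, F_S of one pixel
    to the four pixel types of Table 1."""
    table = {
        (1, 1): "moving object",                            # Type 1
        (1, 0): "temporarily stationary abandoned object",  # Type 2
        (0, 1): "random noise / background change",         # Type 3
        (0, 0): "static scene object",                      # Type 4
    }
    return table[(f_long, f_short)]

print(classify_pixel(1, 0))  # temporarily stationary abandoned object
```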
An abandonment-event detection module, used to detect the various states of abandonment events occurring in the scene, comprising:
An event-ID auto-generation unit: at the moment the carrier begins to separate from the abandoned item, the system automatically generates an abandonment-event detection ID and simultaneously creates a file named after it, used to store the close-up images of the suspicious carrier and the video of the abandonment event;
A carrier snapshot unit: at the moment the carrier begins to separate from the item, the carrier's position (Sx, Sy) is obtained from the object discrimination unit's result; then, according to the mapping table built in the system initialization unit, a digital ID corresponding to (Sx, Sy) is sent to the control port of the high-speed snapshot camera, directing it to rotate to that position and shoot, obtaining a close-up image of the carrier;
A carrier tracking unit, which tracks the carrier's trajectory with a tracking algorithm based on Kalman filtering;
A spatial-position judgement unit, used to confirm the state of separation between the item and its carrier: at the moment the carrier begins to separate from the item, the item's position (Ox, Oy) is recorded; the carrier's position (Sx′, Sy′) computed by the carrier tracking unit is then read continuously, and the distance between the two points is calculated by formula (25):

d = sqrt((Sx′ − Ox)² + (Sy′ − Oy)²)    (25)
A stationary-time computing unit, used to calculate the interval from the moment the carrier begins to separate from the item to the current time: the computer system time T_start is taken at the moment of separation, the current system time T_now is taken after the abandonment-state judgement finishes, and the time the item has been stationary is computed by formula (26):

t = T_now − T_start    (26)

where t is the time the abandoned item has been stationary, T_start is the system time at the moment the carrier began to separate from the item, and T_now is the current system time;
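Formulas (25) and (26) amount to a Euclidean distance and a time difference; a minimal sketch (function and variable names assumed):

```python
import math

def carrier_distance(carrier_xy, item_xy):
    """Formula (25): distance between the tracked carrier (Sx', Sy')
    and the abandoned item (Ox, Oy)."""
    (sx, sy), (ox, oy) = carrier_xy, item_xy
    return math.hypot(sx - ox, sy - oy)

def stationary_time(t_start, t_now):
    """Formula (26): seconds the item has been stationary since separation."""
    return t_now - t_start

print(carrier_distance((3.0, 4.0), (0.0, 0.0)))  # 5.0
print(stationary_time(100.0, 130.0))             # 30.0
```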
An abandonment-state judgement unit, used to confirm whether the item is truly in an abandoned state: based on whether the carrier-to-item distance computed by the spatial-position judgement unit exceeds the prescribed distance, or the carrier has completely disappeared from the scene, and on whether the stationary time computed by the stationary-time computing unit exceeds the prescribed unattended time, it makes the various synthetic determinations of the abandonment state and raises warnings; the synthetic determination table is shown in Table 2:
Type    Distance index (d)                                              Time index (t)                        Determination
Type 1  Exceeds the prescribed distance D, or the carrier has           Exceeds the prescribed time T         Confirmed abandoned object
        completely disappeared
Type 2  No moving object within the prescribed distance D               Exceeds the prescribed time T         Confirmed abandoned object
Type 3  A moving object is within distance D, but the slope of its      Exceeds the prescribed time T         Confirmed abandoned object
        trajectory differs from the slope of its line to the item
        by more than ±Φ
Type 4  Distance situations other than Types 1–3                        Exceeds the prescribed time T         Suspicious abandoned object
Type 5  Meets the distance index of Types 1–3                           Does not exceed the prescribed T      Suspicious abandoned object

Table 2.
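Table 2 can be read as a two-key decision rule; a sketch under one such reading, with placeholder thresholds (D, T, Φ and all names are assumptions, not values from the patent):

```python
def judge_abandonment(d, t, carrier_gone, mover_slope_diffs,
                      D=50.0, T=30.0, PHI=20.0):
    """Synthetic determination per Table 2.

    d: carrier-to-item distance; t: stationary time; carrier_gone: the carrier
    has completely disappeared; mover_slope_diffs: for each moving object within
    distance D, the difference between its trajectory slope and the slope of its
    line to the item (empty list if no moving object is within D)."""
    if t <= T:                                           # Type 5
        return "suspicious abandoned object"
    if d > D or carrier_gone:                            # Type 1
        return "confirmed abandoned object"
    if not mover_slope_diffs:                            # Type 2
        return "confirmed abandoned object"
    if all(abs(s) > PHI for s in mover_slope_diffs):     # Type 3
        return "confirmed abandoned object"
    return "suspicious abandoned object"                 # Type 4

print(judge_abandonment(60.0, 45.0, False, []))  # confirmed abandoned object
```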
Further, the omnidirectional vision sensor satisfies the requirement that the panoramic top-view image of the whole monitored field be undistorted, and is designed by the following method:
According to the imaging principle, the angle between the incident ray V1 and the catadioptric main axis Z is φ, and the angle between the first reflected ray V2 and Z is θ2; the tangent at point P1(t1, F1) makes angle σ with the t axis and its normal makes angle ε with the Z axis. The angle between the second reflected ray V3 and Z is θ1; the tangent at point P2(t2, F2) makes angle σ with the t axis and its normal makes angle ε1 with the Z axis. From these relations formula (1) is obtained, in which

tan φ = t1 / (F1(t1) − s),  tan θ2 = (t1 − t2) / (F2 − F1),  tan θ1 = t2 / F2

where F1 is the primary catadioptric mirror curve and F2 is the secondary catadioptric mirror curve;
Using the triangle relations and simplifying, formulas (2) and (3) are obtained:

F1′² − 2αF1′ − 1 = 0    (2)
F2′² − 2βF2′ − 1 = 0    (3)

where

α = [(F1 − s)(F2 − F1) − t1(t1 − t2)] / [t1(F2 − F1) − (t1 − t2)(F1 − s)]
β = [t2(t1 − t2) + F2(F2 − F1)] / [t2(F2 − F1) − F2(t1 − t2)]

Solving formulas (2) and (3) yields formulas (4) and (5):

F1′ = α ± sqrt(α² + 1)    (4)
F2′ = β ± sqrt(β² + 1)    (5)

where F1′ is the derivative of the curve F1 and F2′ is the derivative of the curve F2;
For the points on the imaging plane and the points on a horizontal plane to satisfy a certain linear relation: let L be a horizontal plane perpendicular to the Z axis at distance C from the viewpoint S; an arbitrary point P on L has a corresponding pixel p on the imaging plane. Expressing the coordinates on the horizontal plane in polar form, the arbitrary point P(r, z) on L satisfies

r = C · tan φ,  z = s + C    (6)

To design an ODVS with average resolution on the horizontal plane, i.e. one undistorted in the horizontal direction, the coordinate r of any point P on L in the direction perpendicular to the Z axis must be guaranteed linear in the image coordinate t2/F2(t2) of the pixel p, so the following must hold:

r = a · f · t2 / F2(t2) + b    (7)

From the imaging principle the incidence angle satisfies formula (8):

tan φ = t1 / (F1 − s)    (8)

Substituting formulas (6) and (8) into (7) and rearranging yields the condition for no distortion in the horizontal direction, formula (9):

t2 = F2(t2) / (a · f) · (C · t1 / (F1(t1) − s) − b)    (9)

A mirror-curve design satisfying formula (9) meets the requirement of average horizontal resolution.
Further, the transparent housing is bowl-shaped, comprising an upper cone and a lower half-sphere, the radius of the half-sphere blending smoothly into the cone.
Further, in the carrier tracking unit, the interval between two consecutive processed frames is fixed and denoted Δt. Suppose the foreground target was at position (Sx, Sy) at the previous moment and is at (Sx′, Sy′) at the current moment, with velocity Vx along the X axis and Vy along the Y axis. The relation between the current and previous positions is then given by formula (21):

Sx′ = Sx + Vx · Δt
Sy′ = Sy + Vy · Δt    (21)

where Sx, Sy are the previous position of the foreground target, Sx′, Sy′ its current position, Δt the interval between the two frames, and Vx, Vy the current velocities of the foreground target along the X and Y axes;
To predict the velocity of the foreground target, let its acceleration be ax along the X axis and ay along the Y axis, and take the system state {Vx, Vy, ax, ay}. With the system control input U(k) set to 0, the state-transition matrix is:

A = | 1  0  Δt  0  |
    | 0  1  0   Δt |
    | 0  0  1   0  |
    | 0  0  0   1  |    (22)

The measurement is the velocity of the foreground target, with measurement matrix:

H = | 1  0  0  0 |
    | 0  1  0  0 |    (23)

Assuming W(k) and V(k) are independent zero-mean noise vectors, their covariances are taken as:

Q = 0.01 · I(4×4),  R = 0.01 · I(2×2)    (24)
The foreground target's velocity is then measured, an initial error covariance is set, and Kalman filtering continually predicts the current motion state from the previous velocity state, achieving predictive tracking.
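A minimal sketch of the prediction step implied by formulas (21)–(22); Δt and all names are assumptions, and the full Kalman gain/update cycle is omitted:

```python
DT = 0.04  # assumed frame interval, e.g. 25 fps

def predict_state(state):
    """x' = A x for the state [Vx, Vy, ax, ay], with A from formula (22)."""
    vx, vy, ax, ay = state
    return [vx + ax * DT, vy + ay * DT, ax, ay]

def predict_position(pos, vel):
    """Formula (21): extrapolate the carrier's position one frame ahead."""
    (sx, sy), (vx, vy) = pos, vel
    return (sx + vx * DT, sy + vy * DT)

p = predict_position((10.0, 20.0), (50.0, -25.0))
print(tuple(round(v, 6) for v in p))  # (12.0, 19.0)
```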
The technical concept of the invention is as follows. The recently developed OmniDirectional Vision Sensor (ODVS) offers a new way to obtain panoramic images of a scene in real time. An ODVS has a 360° field of view and can compress the information of a hemispherical field into a single image of large information content; it can be positioned freely in the scene; it needs no aiming while monitoring the environment; the algorithms for detecting and tracking moving objects within its range are simpler; and it delivers real-time images of the scene. The ODVS camera consists mainly of a CCD camera and a curved mirror facing it: the mirror reflects a full horizontal circle of the scene into the CCD camera, so that a single image captures 360° of horizontal environment information. This gives the omnidirectional camera a very outstanding advantage, especially as a fast and reliable means of visual information acquisition under real-time panoramic processing requirements.
The ODVS camera captures everything in the hemispherical field at once; such omnidirectional vision is a typical machine-vision capability that humans do not possess. Because its imaging principle differs from that of the human eye, omnidirectional images look very different from what the eye sees. The question is therefore how to combine omnidirectional optical imaging with computer vision to provide the intelligent security field with a fast, reliable means of acquiring visual information over a large monitored area, and, from the real-time omnidirectional images produced by the ODVS camera, to detect quickly, accurately, and effectively whether an object (possibly an explosive) has been left in a given area, and to determine when and by whom it was left. A noise-filtering capability keeps the accuracy high even in very busy scenes and changing weather, and the monitoring staff can be alerted by various means to hazardous events that may occur.
A long-period background model and a short-period background model are built with Gaussian mixture models at two different update rates; the real-time panoramic image is then differenced against each model, the results are judged with a plausibility function based on statistical information, and the pixels are classified into moving objects, temporarily stationary objects, and stationary background objects. Taking into account the appearance of moving and temporarily stationary objects in the video and the temporal order of events, abandoned objects in the panoramic scene are then detected. Finally, the high-speed snapshot camera captures a close-up image of the item's carrier, establishing when and by whom the object was abandoned.
The beneficial effects of the invention are mainly: 1. omnidirectional vision, good reliability, and real-time performance; 2. easy maintenance and low maintenance cost; 3. applicability to safety monitoring of large venues such as airports, subways, stadiums, and public places.
Description of drawings
Fig. 1 is the structural diagram of the omnidirectional vision sensor without blind spots;
Fig. 2 is a schematic of the video image captured by the omnidirectional vision sensor;
Fig. 3 is the optical diagram of the combination of the camera lens and the wide-angle lens;
Fig. 4 illustrates the ODVS designed by the two-fold catadioptric principle with average horizontal resolution;
Fig. 5 is the imaging-plane projection diagram for the average-horizontal-resolution design;
Fig. 6 shows the catadioptric mirror curves obtained by solving F1 and F2 numerically with the fourth-order Runge–Kutta algorithm;
Fig. 7 is a schematic of the abandoned-object detection method based on omnidirectional computer vision;
Fig. 8 is the hardware block diagram of the abandoned-object detection device based on omnidirectional computer vision;
Fig. 9 is the flowchart for discriminating foreground objects from background objects;
Fig. 10 is a schematic of the information fusion between the ODVS and the high-speed snapshot camera;
Fig. 11 is the software block diagram of the abandoned-object detection device based on omnidirectional computer vision.
Embodiment
Below in conjunction with accompanying drawing the present invention is further described.
With reference to Figs. 1–11, the key problem of abandoned-object video detection is to detect that a carrier has left an item unattended, beyond the permitted time, in a zone where no articles may be placed. The embodiment proceeds as follows: first, an omnidirectional vision sensor without blind spots and undistorted in the horizontal direction is adopted; its distinctive 360° field of view effectively resolves the uncertainty of where and when an item may appear in the scene. Second, a modeling method based on Gaussian mixture models with two different update rates is proposed, which effectively detects both the abandoned item and its carrier as they enter the panoramic view. Third, a shadow suppression unit is introduced to overcome the inability of the Gaussian mixture model to separate moving objects from moving shadows. Finally, whether an item is abandoned is judged by whether its carrier has left it unattended beyond the permitted time; Fig. 7 depicts such an abandonment state, in which the carrier has left the item behind.
As shown in Fig. 8, the abandoned-object detection device based on omnidirectional computer vision includes an omnidirectional vision sensor, a high-speed snapshot camera, and a computer. The omnidirectional vision sensor obtains the panoramic video image of the monitored scene; the high-speed snapshot camera captures close-up images of the carrier in the monitored scene; and the computer detects whether an abandonment event has occurred in the monitored scene.
As shown in Fig. 11, the computer includes an ODVS image acquisition module, an in-scene object detection unit, and an abandonment-event detection module.
The ODVS image acquisition module obtains the panoramic video image of the monitored scene and includes a system initialization module and an image acquisition module. The in-scene object detection unit extracts moving objects, temporarily stationary abandoned objects, and static scene objects from the monitored scene, and includes the long-period background modeling unit, the short-period background modeling unit, the shadow suppression unit, and the object discrimination unit. The abandonment-event detection module detects the various states of abandonment events occurring in the scene, and includes the event-ID auto-generation unit, the carrier snapshot unit, the carrier tracking unit, the spatial-position judgement unit, the stationary-time computing unit, and the abandonment-state judgement unit; the abandonment-state judgement unit finally outputs the corresponding warning according to these states.
The system initialization module mainly fuses the panoramic video information obtained by the ODVS with the local image information of the high-speed snapshot camera. The fusion is realized by establishing a mapping between coordinates in the ODVS image and the rotation-angle information of the snapshot camera: the panoramic video image is divided into small regions by a preset-point customization method (the division is shown in Fig. 10), and a mapping is then established between each small region and a snapshot preset of the camera. When a close-up of some region is to be captured, sending the corresponding digital ID to the camera's control port makes it turn automatically to that region and shoot. These functions are realized through a mapping table between the ODVS and the snapshot camera; the table is read at system initialization, and the user may also rebuild a new one.
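The mapping table can be as simple as a grid lookup from panorama coordinates to a camera preset ID; a hypothetical sketch (grid size, cell size, and all names are assumptions):

```python
CELL = 40  # assumed cell size in pixels

def build_mapping(grid_w, grid_h):
    """Tile the panorama into grid_w x grid_h cells and bind each cell
    to one snapshot-camera preset ID (hypothetical numbering scheme)."""
    return {(i, j): j * grid_w + i for j in range(grid_h) for i in range(grid_w)}

def preset_for(mapping, x, y):
    """Digital ID to send to the camera's control port for position (x, y)."""
    return mapping[(int(x) // CELL, int(y) // CELL)]

mapping = build_mapping(16, 12)            # e.g. a 640x480 panorama
print(preset_for(mapping, 130.0, 90.0))    # cell (3, 2) -> ID 35
```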
The image acquisition module mainly reads the ODVS panoramic video image through a video capture card or other video interface and passes the acquired panoramic image to the in-scene object detection unit for processing.
The long-period background modeling unit models the background pixels with a Gaussian mixture model using a smaller update rate, i.e. a mixed-Gaussian background modeling algorithm under which both moving objects and temporarily static objects are detected as foreground.
The short-period background modeling unit models the background pixels with a Gaussian mixture model using a larger update rate, under which both temporarily static objects and static scene objects are detected as background.
The main idea of the mixed-Gaussian background modeling algorithm is to represent each pixel by a mixture of several Gaussian models. Let K be the number of Gaussian distributions describing the color distribution of each pixel; the time-varying sequence {Y1, Y2, …, Yn} of each pixel is modeled by the Gaussian mixture. The probability of the currently observed pixel value is defined by formula (16):
P(Y_t) = Σ_{n=1..K} ω_{n,t} · η(Y_t, μ_{n,t}, σ²_{n,t})    (16)
where t is the time point; K is the number of Gaussian models (generally 3–5); ω_{n,t} is the weight of the n-th Gaussian model; and μ_{n,t} and σ²_{n,t} are the mean and variance of the n-th Gaussian model at time t. The probability density function is given by formula (17):
η(Y_t, μ_{n,t}, σ²_{n,t}) = (1 / (sqrt(2π) · σ_{n,t})) · exp(−(Y_t − μ_{n,t})² / (2σ²_{n,t}))    (17)
Each Gaussian distribution has its own weight and priority, and the distributions are always sorted by priority from high to low. A suitable background weight threshold is chosen; only the first few distributions within this threshold are considered background distributions, the rest being foreground distributions. Thus, when detecting foreground points, the pixel Y_t is matched against each Gaussian distribution in priority order: if no distribution representing the background matches Y_t, the point is judged foreground, otherwise background.
The described smaller and larger update rates refer to the update coefficient α of formula (18): a smaller update rate means α takes a smaller value, and a larger update rate means α takes a larger value; the range of the update coefficient is 0 < α < 1:
μ = (1 − α)·μ + α·Y
σ² = (1 − α)·σ² + α·(Y − μ)²    (18)
In the formula, μ and σ² are respectively the mean and variance of the Gaussian model, α is the update coefficient, and Y is the pixel value.
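As an illustration of the modeling of formulas (16)-(18), the following Python sketch maintains a per-pixel mixture of K Gaussians with a single update coefficient α. The 2.5σ match test, K = 3, the initial variance, and the background weight threshold are common conventions assumed here, not values fixed by the text; the same model run with a small α behaves as the long-period background and with a large α as the short-period background.

```python
import numpy as np

class MixtureOfGaussians:
    """Per-pixel mixture-of-Gaussians background model (formulas (16)-(18) sketch)."""

    def __init__(self, k=3, alpha=0.01, weight_threshold=0.7):
        self.k = k
        self.alpha = alpha                    # update coefficient, 0 < alpha < 1
        self.w_thresh = weight_threshold      # background weight threshold
        self.weights = np.full(k, 1.0 / k)
        self.means = np.linspace(0.0, 255.0, k)
        self.vars = np.full(k, 225.0)         # assumed initial variance

    def _order(self):
        # priority: large weight / small variance first
        return np.argsort(-self.weights / np.sqrt(self.vars))

    def update(self, y):
        """Update the model with pixel value y; return True if y is foreground."""
        order = self._order()
        # the first few distributions within the weight threshold are background
        cum = np.cumsum(self.weights[order])
        n_bg = int(np.searchsorted(cum, self.w_thresh)) + 1
        matched = None
        for rank, i in enumerate(order):
            if abs(y - self.means[i]) <= 2.5 * np.sqrt(self.vars[i]):
                matched = (rank, i)
                break
        if matched is None:
            # no distribution matches: replace the lowest-priority one, mark foreground
            i = order[-1]
            self.means[i], self.vars[i] = float(y), 225.0
            self.weights[i] = 0.05
            foreground = True
        else:
            rank, i = matched
            a = self.alpha
            self.means[i] = (1 - a) * self.means[i] + a * y              # formula (18)
            self.vars[i] = (1 - a) * self.vars[i] + a * (y - self.means[i]) ** 2
            self.weights *= (1 - a)
            self.weights[i] += a
            foreground = rank >= n_bg
        self.weights /= self.weights.sum()
        return foreground
```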
The described shadow suppression unit is used to separate moving objects from moving shadows in the active region. The shadow suppression unit is introduced here to improve the detection precision of the device: the shadow detection algorithm is combined with the mixture-of-Gaussians model. For a newly arriving pixel value I_t, the mixed Gaussian background model first judges whether the pixel belongs to the background or to the moving foreground; if it is foreground, the shadow decision formula is further applied to judge whether it belongs to a moving shadow. The detailed flowchart is shown in Figure 9.
The described shadow detection algorithm adopts a detection method based on the HSV color space; the HSV color space is closer to the way human vision perceives color than the RGB color space and can therefore detect shadows more accurately. In the HSV color space, the H component represents hue, the S component saturation, and the V component brightness. Since the brightness and saturation of a shadowed region are darker than the background while the hue remains largely unchanged, whether a pixel belongs to a shadow can be judged with decision formula (20):
SP(x, y) = 1, if α ≤ I_V(x, y)/B_V(x, y) ≤ β and I_S(x, y) − B_S(x, y) ≤ τ_S and |I_H(x, y) − B_H(x, y)| ≤ τ_H; otherwise SP(x, y) = 0    (20)
In the formula, I_V(x, y), I_S(x, y), I_H(x, y) and B_V(x, y), B_S(x, y), B_H(x, y) denote the H, S, V components of the new input pixel value I and of the background pixel value at (x, y), respectively. The parameters satisfy 0 < α < β < 1; the value of α takes the strength of the shadow into account: the stronger the shadow cast on the background, the smaller α should be taken, while β is used to enhance the robustness of the system. The parameter τ_S is less than zero, and τ_H is chosen mainly by tuning according to the scene.
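The HSV shadow decision of formula (20) can be sketched as a simple predicate; the threshold values below are illustrative only, since the text specifies merely 0 < α < β < 1, τ_S < 0, and a scene-tuned τ_H:

```python
def is_shadow(i_h, i_s, i_v, b_h, b_s, b_v,
              alpha=0.4, beta=0.9, tau_s=-0.1, tau_h=30.0):
    """Return True when pixel (i_h, i_s, i_v) is a moving shadow cast on
    background pixel (b_h, b_s, b_v); thresholds are illustrative."""
    if b_v == 0:
        return False                         # avoid division by zero
    ratio_ok = alpha <= i_v / b_v <= beta    # shadow darkens brightness
    sat_ok = (i_s - b_s) <= tau_s            # shadow lowers saturation
    hue_ok = abs(i_h - b_h) <= tau_h         # hue stays roughly constant
    return ratio_ok and sat_ok and hue_ok
```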
The described object discrimination unit is used to extract the moving objects, temporarily stationary left-behind objects, and scene stationary objects in the monitoring scene. First the panoramic video image is read; then the intermediate processing results B_L and B_S obtained in the described long-period and short-period background modeling units are subjected to a difference operation, yielding the long-period foreground (F_L, Foreground Long) and the short-period foreground (F_S, Foreground Short). According to the relation between the F_L and F_S values, four different types of a given pixel in the current frame are summarized; the determination results are shown in Table 1:
Table 1: Type of a given pixel determined from the values of F_L and F_S

Type      F_L(x, y)   F_S(x, y)   Determination
Type 1    1           1           Moving object
Type 2    1           0           Temporarily stationary left-behind object
Type 3    0           1           Random noise / background change
Type 4    0           0           Scene stationary object
According to the determination of Table 1, Type 1 belongs to moving objects, Type 2 to temporarily stationary left-behind objects, and Type 4 to scene stationary objects;
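Table 1 amounts to a two-bit lookup on the foreground masks; a minimal sketch:

```python
def classify_pixel(f_l, f_s):
    """Classify a pixel from the binary long/short-period foreground values
    F_L(x, y) and F_S(x, y), following Table 1."""
    table = {
        (1, 1): "moving object",                              # Type 1
        (1, 0): "temporarily stationary left-behind object",  # Type 2
        (0, 1): "random noise / background change",           # Type 3
        (0, 0): "scene stationary object",                    # Type 4
    }
    return table[(f_l, f_s)]
```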
The described left-behind event detection ID automatic generation unit is used for naming left-behind events. At the moment the carrier of an article begins to separate from the left-behind article, the system automatically generates a left-behind event detection ID and simultaneously creates a folder named with this ID, used to store the close-up image of the suspected carrier of the article and the video image of the left-behind event;
The described left-behind event detection ID is used to identify a left-behind event and serves as the primary key for storing the video and image data related to the event. The naming rule of the left-behind event detection ID is YYYYMMDDHHMMSS, a 14-digit identifier, where YYYY denotes the Gregorian year, MM the month, DD the day, HH the hour, MM the minute, and SS the second; it is produced automatically from the computer system time;
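The ID rule can be sketched directly from the system clock; the storage root passed to new_event_folder is an assumed parameter, since the text specifies only the folder's name, not its location:

```python
from datetime import datetime
import os

def new_event_id(now=None):
    """Generate a 14-digit left-behind event detection ID (YYYYMMDDHHMMSS)."""
    now = now or datetime.now()
    return now.strftime("%Y%m%d%H%M%S")

def new_event_folder(root, now=None):
    """Create the folder named with the event ID, used to store the carrier
    close-up images and the event video; `root` is an assumed storage root."""
    event_id = new_event_id(now)
    path = os.path.join(root, event_id)
    os.makedirs(path, exist_ok=True)
    return event_id, path
```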
The described carrier snapshot unit is used to capture a close-up image of the carrier of the left-behind article and save it in the folder named with the left-behind event detection ID. At the moment the carrier begins to separate from the article, the carrier's position (S_x, S_y) is obtained from the processing result of the described object discrimination unit; then, according to the mapping table established in the described system initialization module, a digital ID corresponding to the carrier position (S_x, S_y) is output to the control port of the high-speed dome camera, instructing it to rotate, aim at this position, and shoot, so as to obtain a close-up image of the carrier of the left-behind article; the snapshot scheme is shown in Figure 10;
The described carrier tracking unit is used to track the trajectory of the carrier of the left-behind article. In the present invention a tracking algorithm based on Kalman filtering is adopted. Kalman filtering is an algorithm that seeks a recursive estimate that is optimal in the minimum mean-square-error sense; its basic idea is to adopt a state-space model of signal and noise, and to update the estimate using the estimated value of the previous moment and the measured value of the current moment, obtaining the estimated value of the state variable at the current moment;
When Kalman filtering is used to track a target object, the current motion state of the target can be computed as soon as the target's velocity is obtained. Suppose the interval between processing two consecutive frames is fixed and denoted Δt; suppose the position of the foreground target at the previous moment is (S_x, S_y) and at the current moment (S_x', S_y'); the velocity of the current foreground target along the X axis is V_x and along the Y axis V_y. Then the relation between the current position and the previous position of the foreground target is given by formula (21):
S_x' = S_x + V_x·Δt
S_y' = S_y + V_y·Δt    (21)
In the formula, S_x, S_y are the x, y position of the foreground target at the previous moment, S_x', S_y' are the x, y position at the current moment, Δt is the time interval between two frames, and V_x, V_y are the velocities of the current foreground target along the X and Y axes;
Therefore, as long as the velocity of the foreground target can be predicted, the target can be tracked. To predict the velocity of the foreground target, suppose its acceleration along the X axis is a_x and along the Y axis a_y; the system state {V_x, V_y, a_x, a_y} can then be established. The system control input U(k) is taken as 0. The state-transition matrix is:
A = [ 1  0  Δt 0
      0  1  0  Δt
      0  0  1  0
      0  0  0  1 ]    (22)
In the present invention the measurement is the velocity of the foreground target, so the measurement matrix is:
H = [ 1  0  0  0
      0  1  0  0 ]    (23)
Suppose W(k) and V(k) are zero-mean, mutually independent noise vectors; their covariances can then be set as:
Q = [ 0.01 0    0    0
      0    0.01 0    0
      0    0    0.01 0
      0    0    0    0.01 ]    R = [ 0.01 0.01
                                     0.01 0.01 ]    (24)
Then the motion velocity of the foreground target is recorded and an initial error covariance is set; through Kalman filtering, the motion state at the current moment can be continuously predicted from the velocity state of the foreground target at the previous moment, realizing predictive tracking;
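The predictive tracker of formulas (21)-(24) can be sketched as a standard Kalman filter over the state {V_x, V_y, a_x, a_y}. The identity initial error covariance and the diagonal measurement covariance R are assumptions made here so the sketch is well conditioned:

```python
import numpy as np

def kalman_track(velocities, dt=1.0):
    """Filter a sequence of measured (Vx, Vy) velocities with a Kalman
    filter whose state is {Vx, Vy, ax, ay}; returns the filtered velocities."""
    A = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)   # state transition, formula (22)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)   # measurement matrix, formula (23)
    Q = 0.01 * np.eye(4)                        # process noise, formula (24)
    R = 0.01 * np.eye(2)                        # measurement noise, taken diagonal here
    x = np.zeros(4)
    P = np.eye(4)                               # assumed initial error covariance
    out = []
    for z in velocities:
        # predict
        x = A @ x
        P = A @ P @ A.T + Q
        # update with the measured velocity
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.asarray(z, dtype=float) - H @ x)
        P = (np.eye(4) - K @ H) @ P
        out.append(x[:2].copy())
    return out
```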
The described left-behind article spatial position state judging unit is used to confirm the state in which the left-behind article and its carrier are separated. At the moment the carrier begins to separate from the article, the position (O_x, O_y) of the left-behind object is obtained; then the carrier position (S_x', S_y') computed in the described carrier tracking unit is read continuously, and the distance between these two points is computed as shown in formula (25):
d = √((S_x' − O_x)² + (S_y' − O_y)²)    (25)
The described left-behind article dwell-time computing unit is used to calculate the interval from the moment the carrier begins to separate from the article to the current moment. The computation obtains the computer system time T_Start at the moment the carrier begins to separate from the article; then, after the described left-behind state judging unit finishes its judgment, the current computer system time T_Now is obtained, and the dwell time of the left-behind article is calculated with formula (26):
t = T_Now − T_Start    (26)
In the formula, t is the dwell time of the left-behind article, T_Start is the system time at the moment the carrier began to separate from the article, and T_Now is the current system time;
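Formulas (25) and (26) are direct to compute:

```python
import math
import time

def separation_distance(carrier_xy, object_xy):
    """Distance between the carrier (Sx', Sy') and the left-behind object
    (Ox, Oy), formula (25)."""
    sx, sy = carrier_xy
    ox, oy = object_xy
    return math.hypot(sx - ox, sy - oy)

def dwell_time(t_start, t_now=None):
    """Elapsed time since the carrier/object separation moment, formula (26)."""
    if t_now is None:
        t_now = time.time()
    return t_now - t_start
```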
The described left-behind state judging unit is used to confirm whether a left-behind article is really in a left-behind state. According to whether the carrier-to-article distance computed in the described left-behind article spatial position state judging unit has exceeded the prescribed distance or the carrier has completely disappeared from the scene, and whether the dwell time computed in the described dwell-time computing unit has exceeded the prescribed stand-alone dwell time, the various left-behind state determinations and warnings are made; the comprehensive determination table for left-behind objects is shown in Table 2;
Table 2: Comprehensive determination table for left-behind objects

Type 1: distance d exceeds the prescribed distance D, or the carrier has completely disappeared; dwell time t exceeds the prescribed time T — confirmed left-behind object
Type 2: no moving object within the prescribed distance D; dwell time t exceeds the prescribed time T — confirmed left-behind object
Type 3: a moving object exists within the prescribed distance D, but the difference between the slope of its motion trajectory and the slope of its line to the left-behind object exceeds ±Φ; dwell time t exceeds the prescribed time T — confirmed left-behind object
Type 4: none of the distance indexes of Types 1-3 is satisfied; dwell time t exceeds the prescribed time T — suspected left-behind object
Type 5: a distance index of Types 1-3 is satisfied; dwell time t does not exceed the prescribed time T — suspected left-behind object
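The determinations of Table 2 can be sketched as one predicate per index; the threshold values D, T, and Φ are placeholders, since the text leaves the prescribed values to configuration:

```python
def judge_left_behind(d, t, has_nearby_motion, slope_diff,
                      disappeared=False, D=50.0, T=30.0, PHI=0.5):
    """Comprehensive left-behind determination following Table 2.

    d: carrier-to-object distance; t: dwell time; slope_diff: difference
    between the nearby moving object's trajectory slope and the slope of
    its line to the object.  D, T, PHI are placeholder thresholds."""
    distance_ok = (
        d > D or disappeared                               # Type 1
        or not has_nearby_motion                           # Type 2
        or (has_nearby_motion and abs(slope_diff) > PHI)   # Type 3
    )
    if t > T:
        return "left-behind object" if distance_ok else "suspicious (Type 4)"
    return "suspicious (Type 5)" if distance_ok else "no alarm"
```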
The described omnidirectional vision sensor (ODVS) is used to obtain a complete panoramic video image that effectively records when, where, and by whom an article was left behind; this requires an ODVS designed with no dead angle and with no deformation in the horizontal direction;
The described non-deformation in the horizontal direction requires a design with average resolution in the horizontal direction, so as to satisfy the requirement that the panoramic video image of the overhead view of the whole monitored field be undeformed. This can be ascribed to the design of the catadioptric mirror curves in the ODVS design. As shown in Figure 4, the incident ray V1 from a light source point P in space is reflected at a point (t1, F1) on the primary reflecting mirror; the reflected ray V2 is reflected again at a point (t2, F2) on the secondary reflecting mirror; the reflected ray V3 enters the lens of the camera at angle θ1 and forms an image on the imaging unit (CCD or CMOS).
According to the imaging principle, the angle between the incident ray V1 and the catadioptric principal axis Z is Φ; the angle between the primary reflected ray V2 and the axis Z is θ2; at point P1(t1, F1) the angle between the tangent and the t axis is σ, and the angle between the normal and the Z axis is ε. The angle between the secondary reflected ray V3 and the axis Z is θ1; at point P2(t2, F2) the angle between the tangent and the t axis is σ, and the angle between the normal and the Z axis is ε1. Based on these relations, formula (1) can be obtained, in which:
tan φ = t1 / (F1(t1) − s),  tan θ2 = (t1 − t2) / (F2 − F1),  tan θ1 = t2 / F2
In the formula, F1 is the primary catadioptric mirror curve and F2 the secondary catadioptric mirror curve. Using the triangle relations and simplifying, formulas (2) and (3) are obtained:
F1′² − 2αF1′ − 1 = 0    (2)
F2′² − 2βF2′ − 1 = 0    (3)
In the above formulas:
α = [(F1 − s)(F2 − F1) − t1(t1 − t2)] / [t1(F2 − F1) − (t1 − t2)(F1 − s)]
β = [t2(t1 − t2) + F2(F2 − F1)] / [t2(F2 − F1) − F2(t1 − t2)]
Solving formulas (2) and (3) yields formulas (4) and (5):
F1′ = α ± √(α² + 1)    (4)
F2′ = β ± √(β² + 1)    (5)
In the formulas, F1′ is the derivative of curve F1 and F2′ is the derivative of curve F2;
The points on the described imaging plane and the points on a horizontal plane have a certain linear relationship. For an arbitrary point P on a horizontal plane L perpendicular to the Z axis at distance C from the viewpoint S, there is a corresponding pixel p on the imaging plane, as shown in Figure 4. Expressing the coordinates on the horizontal plane in polar coordinates, an arbitrary point P(r, z) on the horizontal plane L can be expressed by the following formula:
r = C·tan φ,  z = s + C    (6)
To design an ODVS with average resolution on the horizontal plane, i.e. an ODVS undeformed in the horizontal direction, a linear relationship must be guaranteed between the coordinate r of an arbitrary point P on the horizontal plane L (in the direction perpendicular to the Z axis) and the distance t2/F2(t2) of the pixel p from the Z axis. Let the following formula hold:
r = a·f·t2/F2(t2) + b    (7)
According to the imaging principle the following relation holds, with the incidence angle expressed by formula (8):
tan φ = t1 / (F1 − s)    (8)
Substituting formulas (6) and (8) into formula (7) and rearranging gives the condition for non-deformation in the horizontal direction, expressed by formula (9):
t2 = (F2(t2) / (a·f)) · (t1 / (F1(t1) − s) − b)    (9)
A mirror curve design satisfying formula (9) meets the requirement of average resolution in the horizontal direction;
The numerical solutions of F1 and F2 are obtained from formulas (2), (3), and (9) using the 4th-order Runge-Kutta algorithm; the primary and secondary catadioptric mirror curves computed in this way realize average resolution in the horizontal direction. Figure 6 shows the mirror curves obtained as the numerical solution of F1 and F2 with the 4th-order Runge-Kutta algorithm;
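A generic 4th-order Runge-Kutta integrator of the kind used to solve the mirror-curve equations can be sketched as follows. In the actual design the right-hand side f would be F′ = α + √(α² + 1) (or the corresponding β form) with α, β evaluated from formulas (2), (3), and (9); that coupled right-hand side is not reproduced here, so the sketch is verified on a known ODE instead:

```python
import math

def rk4_step(f, x, y, h):
    """One classical 4th-order Runge-Kutta step for y' = f(x, y)."""
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * k1 / 2)
    k3 = f(x + h / 2, y + h * k2 / 2)
    k4 = f(x + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def solve_profile(f, x0, y0, x_end, n):
    """Integrate a mirror-profile ODE F' = f(t, F) from t = x0 to x_end
    in n steps; returns the sample points and the solution values."""
    h = (x_end - x0) / n
    xs, ys = [x0], [y0]
    for i in range(n):
        ys.append(rk4_step(f, xs[-1], ys[-1], h))
        xs.append(x0 + (i + 1) * h)
    return xs, ys
```

For example, integrating y′ = y from y(0) = 1 to x = 1 reproduces e to high accuracy, confirming the 4th-order behavior.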
The transparent housing 2 is designed so that its inner wall does not produce reflected interference light, as shown in Figure 1. The specific practice is to design the transparent housing as a bowl, i.e. as a hemisphere; this avoids reflected interference light at the transparent housing 2. The structure of the ODVS is shown in Figure 1;
The described dead-angle-free ODVS is used to overcome the dead angle of the original ODVS caused by occlusion by the secondary catadioptric mirror. In the present invention a wide-angle lens is arranged on the secondary catadioptric mirror; designing the wide-angle lens and determining its position is one task of the present invention. Figure 3 shows the positional relationship between the camera lens and the wide-angle lens. In Figure 3 the wide-angle lens is arranged in front of the primary catadioptric mirror, on the secondary catadioptric mirror; the central axes of the camera lens, the wide-angle lens, the primary catadioptric mirror, and the secondary catadioptric mirror lie on the same axis. An image is formed between the wide-angle lens and the camera lens through the small hole in the primary catadioptric mirror, called the first imaging point; this imaging point is imaged at the viewpoint by the camera lens. Denoting the focal length of the camera lens f1, the focal length of the wide-angle lens f2, the distance from the camera lens to its focus S1, the distance from the camera lens to the first imaging point S2, the distance from the wide-angle lens to the first imaging point S3, and the distance from the wide-angle lens to the object point S4, the following relational expressions are obtained from the lens imaging formula:
1/f1 = 1/S1 + 1/S2    (10)
1/f2 = 1/S3 + 1/S4    (11)
d = S2 + S3    (12)
If formula (12) holds, i.e. the wide-angle lens of Figure 3 is placed behind the first catadioptric mirror at distance d from the camera lens, the wide-angle image shown in the middle of the image in Figure 2 is obtained. In the present invention, however, the wide-angle lens is arranged on the second catadioptric mirror; the distance d between the camera lens and the wide-angle lens therefore serves as a constraint, and only the focal length f2 of the wide-angle lens can be designed to satisfy the requirement of formula (12);
Further, considering the camera lens and the wide-angle lens of Figure 3 as one compound lens, its focal length f can be expressed by the following formula:
1/f = (f1 + f2 − d) / (f1·f2)    (13)
In addition, denoting the diameter of the compound lens D, its magnification can be expressed by the following formula:
n = D/f    (14)
To match the field of view of the compound lens to the dead-angle part of the ODVS, the following formula must be satisfied when designing the compound lens:
n = D/f = 2·θ1max    (15)
In the formula, θ1max is the maximum angle between the secondary reflected ray V3 and the catadioptric principal axis Z. Figure 2 shows the image effect produced by the ODVS designed as above: the dead-angle part of the original ODVS is eliminated in a single ODVS, and the combination of camera lens and wide-angle lens, together with the design of the first and second catadioptric mirrors, covers the dead-angle part of the original ODVS effectively.
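Formulas (10)-(15) are simple thin-lens arithmetic; a sketch with illustrative numbers (the focal lengths and distances below are examples, not design values from the text):

```python
def image_distance(f, s_obj):
    """Thin-lens imaging: solve 1/f = 1/s_obj + 1/s_img for s_img,
    as in formulas (10) and (11)."""
    return 1.0 / (1.0 / f - 1.0 / s_obj)

def compound_focal_length(f1, f2, d):
    """Focal length f of the camera lens + wide-angle lens compound,
    formula (13): 1/f = (f1 + f2 - d)/(f1*f2)."""
    return (f1 * f2) / (f1 + f2 - d)

def magnification(D, f):
    """Magnification of the compound lens, formula (14)."""
    return D / f

def required_diameter(f, theta1_max):
    """Diameter D matching the ODVS dead angle via n = D/f = 2*theta1_max,
    formula (15)."""
    return 2.0 * theta1_max * f
```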
The small hole on the described first catadioptric mirror, the first catadioptric mirror, the camera, the transparent housing, the second catadioptric mirror, and the wide-angle lens lie on the same central axis; the camera lens is placed at the viewpoint position behind the first catadioptric mirror, as shown in Figure 1;
The described transparent housing mainly supports the first catadioptric mirror, the second catadioptric mirror, and the wide-angle lens, and protects the two catadioptric mirrors from contamination by external dust, which would degrade the catadioptric quality.
Embodiment 2
Sometimes, in order to save system cost, when only detecting whether a left-behind object exists in the monitoring scene, the high-speed dome camera can be dispensed with in hardware and the related snapshot modules omitted in software; the rest is identical to Embodiment 1.
The effect produced by Embodiments 1 and 2 above is that, through the omnidirectional computer vision sensor, the scope of safety monitoring becomes broader, providing a brand-new safety-monitoring approach, means, and device that is low in maintenance cost, easy to maintain, more reliable in judgment, visual, and intelligent. It can be applied to anti-terrorism safety detection in airports, subways, stadiums, public places, and the like.

Claims (2)

1. A left-behind object detection device based on omnidirectional computer vision, comprising a vision sensor used to monitor the security situation within a large-scale panorama, and a microprocessor used to carry out left-behind object detection according to the signals of the vision sensor, the described vision sensor being connected with the described microprocessor, characterized in that: the described vision sensor consists of an omnidirectional vision sensor and a high-speed dome snapshot sensor; the described omnidirectional vision sensor comprises a support, a transparent housing, a primary catadioptric mirror, a secondary catadioptric mirror, a camera lens, and a wide-angle lens; the described transparent housing, primary catadioptric mirror, and camera lens are mounted on the support; the described primary catadioptric mirror is located at the top of the described transparent housing, the bottom center of the described transparent housing is open, and the described camera lens is located above the described primary catadioptric mirror; the described secondary catadioptric mirror is installed in the described central opening, a small hole is opened in the middle of the described secondary catadioptric mirror, and the described wide-angle lens is installed in the described small hole; the central axes of the camera lens, the primary catadioptric mirror, the secondary catadioptric mirror, and the wide-angle lens lie on the same axis. Denoting the focal length of the camera lens f1, the focal length of the wide-angle lens f2, the distance from the camera lens to its focus S1, the distance from the camera lens to the first imaging point S2, the distance from the wide-angle lens to the first imaging point S3, and the distance from the wide-angle lens to the object point S4, the following relational expressions are obtained from the lens imaging formula:
1/f1 = 1/S1 + 1/S2    (10)
1/f2 = 1/S3 + 1/S4    (11)
d = S2 + S3    (12)
With the distance d between the camera lens and the wide-angle lens as a constraint, the focal length f2 of the wide-angle lens is designed so as to satisfy the requirement of formula (12);
The camera lens and the wide-angle lens are considered as one compound lens, whose focal length f is expressed by the following formula:
1/f = (f1 + f2 − d) / (f1·f2)    (13)
In addition, denoting the diameter of the compound lens D, its magnification is expressed by the following formula:
n = D/f    (14)
The compound lens satisfies the following formula:
n = D/f = 2·θ1max    (15)
In formula (15), θ1max is the maximum angle between the secondary reflected ray V3 and the catadioptric principal axis Z; the described microprocessor comprises:
an ODVS image acquisition module, used to obtain the panoramic video image of the monitoring scene, comprising a system initialization unit and an image acquisition unit;
an in-scene object detection module, used to extract the moving objects, temporarily stationary left-behind objects, and scene stationary objects in the monitoring scene, comprising:
a long-period background modeling unit, used to model background pixels with a smaller update rate by adopting a mixture-of-Gaussians model;
a short-period background modeling unit, used to model background pixels with a larger update rate by adopting a mixture-of-Gaussians model;
a shadow suppression unit, used to separate moving objects from moving shadows in the active region with a detection method based on the HSV color space, in which the H component represents hue, the S component saturation, and the V component brightness; whether a pixel belongs to a shadow is judged with decision formula (20):
SP(x, y) = 1, if α ≤ I_V(x, y)/B_V(x, y) ≤ β and I_S(x, y) − B_S(x, y) ≤ τ_S and |I_H(x, y) − B_H(x, y)| ≤ τ_H; otherwise SP(x, y) = 0    (20)
In the formula, I_V(x, y), I_S(x, y), I_H(x, y) and B_V(x, y), B_S(x, y), B_H(x, y) denote the H, S, V components of the new input pixel value I and of the background pixel value at (x, y), respectively; the parameters satisfy 0 < α < β < 1, where α takes the strength of the shadow into account (the stronger the shadow cast on the background, the smaller α is taken) and β is used to enhance the robustness of the system; the parameter τ_S is less than zero, while τ_H is chosen mainly by tuning according to the scene;
an object discrimination unit, used to extract the moving objects, temporarily stationary left-behind objects, and scene stationary objects in the monitoring scene: the intermediate processing results B_L and B_S obtained in the described long-period and short-period background modeling units are subjected to a difference operation, yielding the long-period foreground F_L and the short-period foreground F_S; then, according to the relation between the F_L and F_S values, four different types of a given pixel in the current frame are summarized, with the determination results shown in Table 1:
Type      F_L(x, y)   F_S(x, y)   Determination
Type 1    1           1           Moving object
Type 2    1           0           Temporarily stationary left-behind object
Type 3    0           1           Random noise / background change
Type 4    0           0           Scene stationary object
Table 1
According to the determination of Table 1, Type 1 belongs to moving objects, Type 2 to temporarily stationary left-behind objects, and Type 4 to scene stationary objects;
a left-behind event detection module, used to detect the various states of left-behind events occurring in the scene, comprising:
a left-behind event detection ID automatic generation unit, used so that, at the moment the carrier of an article begins to separate from the left-behind article, the system automatically generates a left-behind event detection ID and simultaneously creates a folder named with this ID, used to store the close-up image of the suspected carrier of the article and the video image of the left-behind event;
a carrier snapshot unit, used so that, at the moment the carrier begins to separate from the article, the carrier's position (S_x, S_y) is obtained from the processing result of the described object discrimination unit; then, according to the mapping table established in the system initialization unit, a digital ID corresponding to the carrier position (S_x, S_y) is output to the control port of the high-speed dome camera, instructing the high-speed dome snapshot sensor to rotate, aim at this position, and shoot, obtaining a close-up image of the carrier of the left-behind article;
a carrier tracking unit, used to track the trajectory of the carrier of the left-behind article by adopting a tracking algorithm based on Kalman filtering;
a left-behind article spatial position state judging unit, used to confirm the state in which the left-behind article and its carrier are separated: at the moment the carrier begins to separate from the article, the position (O_x, O_y) of the left-behind object is obtained; then the carrier position (S_x', S_y') computed in the described carrier tracking unit is read continuously, and the distance between these two points is computed as shown in formula (25):
d = √((S_x' − O_x)² + (S_y' − O_y)²)    (25)
a left-behind article dwell-time computing unit, used to calculate the interval from the moment the carrier begins to separate from the article to the current moment: the computer system time T_Start is obtained at the moment the carrier begins to separate from the article; then, after the described left-behind state judging unit finishes its judgment, the current computer system time T_Now is obtained, and the dwell time of the left-behind article is calculated with formula (26):
t = T_Now − T_Start    (26)
In the formula, t is the dwell time of the left-behind article, T_Start is the system time at the moment the carrier began to separate from the article, and T_Now is the current system time;
a left-behind state judging unit, used to confirm whether a left-behind article is really in a left-behind state: according to whether the carrier-to-article distance computed in the described left-behind article spatial position state judging unit has exceeded the prescribed distance or the carrier has completely disappeared from the scene, and whether the dwell time computed in the described dwell-time computing unit has exceeded the prescribed stand-alone dwell time, the various left-behind state determinations and warnings are made; the comprehensive determination table for left-behind objects is shown in Table 2;
Table 2.
2. The left-behind object detection device based on omnidirectional computer vision as claimed in claim 1, characterized in that: in the described carrier tracking unit, the interval between processing two consecutive frames is fixed and denoted Δt; suppose the position of the foreground target at the previous moment is (S_x, S_y) and at the current moment (S_x', S_y'), and the velocities of the current foreground target along the X and Y axes are V_x and V_y; then the relation between the current position and the previous position of the foreground target is given by formula (21):
S_x' = S_x + V_x·Δt
S_y' = S_y + V_y·Δt    (21)
In the formula, S_x, S_y are the x, y position of the foreground target at the previous moment, S_x', S_y' are the x, y position at the current moment, Δt is the time interval between two frames, and V_x, V_y are the velocities of the current foreground target along the X and Y axes;
To predict the velocity of the foreground target, its acceleration along the X axis is set to a_x and along the Y axis to a_y, establishing the system state {V_x, V_y, a_x, a_y}; the system control input U(k) is taken as 0, and the state-transition matrix is:
A = [ 1  0  Δt 0
      0  1  0  Δt
      0  0  1  0
      0  0  0  1 ]    (22)
The measurement is the velocity of the foreground target, so the measurement matrix is:
H = [ 1  0  0  0
      0  1  0  0 ]    (23)
Suppose W(k) and V(k) are zero-mean, mutually independent noise vectors; their covariances are then set as:
Q = [ 0.01 0    0    0
      0    0.01 0    0
      0    0    0.01 0
      0    0    0    0.01 ]    R = [ 0.01 0.01
                                     0.01 0.01 ]    (24)
Then the motion velocity of the foreground target is recorded and an initial error covariance is set; through Kalman filtering, the motion state at the current moment is continuously predicted from the velocity state of the foreground target at the previous moment, realizing predictive tracking.
CN2009100970717A 2009-03-30 2009-03-30 Remnant detection device based on comprehensive computer vision Expired - Fee Related CN101552910B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009100970717A CN101552910B (en) 2009-03-30 2009-03-30 Remnant detection device based on comprehensive computer vision


Publications (2)

Publication Number Publication Date
CN101552910A CN101552910A (en) 2009-10-07
CN101552910B true CN101552910B (en) 2011-04-06

Family

ID=41156848

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009100970717A Expired - Fee Related CN101552910B (en) 2009-03-30 2009-03-30 Remnant detection device based on comprehensive computer vision

Country Status (1)

Country Link
CN (1) CN101552910B (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101853552A (en) * 2010-04-20 2010-10-06 长春理工大学 Omnibearing non-blind area moving object detection method
CN102256104B (en) * 2010-05-20 2013-12-11 鸿富锦精密工业(深圳)有限公司 Hand-held device and method for dynamically monitoring specific area by using same
CN102063614B (en) * 2010-12-28 2015-06-03 天津市亚安科技股份有限公司 Method and device for detecting lost articles in security monitoring
KR101237970B1 (en) * 2011-01-17 2013-02-28 포항공과대학교 산학협력단 Image survailance system and method for detecting left-behind/taken-away of the system
DE102011112652A1 (en) * 2011-05-16 2012-11-22 Eads Deutschland Gmbh Image analysis for ordnance disposal and security controls
CN102509075B (en) * 2011-10-19 2013-07-24 北京国铁华晨通信信息技术有限公司 Remnant object detection method and device
JP5954106B2 (en) * 2012-10-22 2016-07-20 ソニー株式会社 Information processing apparatus, information processing method, program, and information processing system
CN103093427A (en) * 2013-01-15 2013-05-08 信帧电子技术(北京)有限公司 Monitoring method and monitoring system of personnel stay
CN103605983B (en) * 2013-10-30 2017-01-25 天津大学 Remnant detection and tracking method
CN104156939B (en) * 2014-04-17 2016-10-05 四川大学 A kind of remnant object detection method based on SOBS and GMM
CN104156942B (en) * 2014-07-02 2017-02-15 华南理工大学 Detection method for remnants in complex environment
CN104751483B (en) * 2015-03-05 2018-02-09 北京农业信息技术研究中心 A kind of monitoring method of warehouse logisticses robot work region abnormal conditions
JP6390860B2 (en) * 2016-01-25 2018-09-19 パナソニックIpマネジメント株式会社 Left object monitoring device, left object monitoring system including the same, and left object monitoring method
CN106846357A (en) * 2016-12-15 2017-06-13 重庆凯泽科技股份有限公司 A kind of suspicious object detecting method and device
SG11202000466PA (en) 2017-07-28 2020-02-27 Nec Corp Information processing apparatus, control method, and program
CN110232359B (en) * 2019-06-17 2021-10-01 中国移动通信集团江苏有限公司 Retentate detection method, device, equipment and computer storage medium
CN110717432A (en) * 2019-09-29 2020-01-21 上海依图网络科技有限公司 Article detection method and device and computer storage medium
CN110706227B (en) * 2019-10-14 2022-07-05 普联技术有限公司 Article state detection method, system, terminal device and storage medium
CN110751107A (en) * 2019-10-23 2020-02-04 北京精英系统科技有限公司 Method for detecting event of discarding articles by personnel

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101004538A (en) * 2007-01-18 2007-07-25 汤一平 Omnibearing vision sensor with no dead angle
JP2008097064A (en) * 2006-10-06 2008-04-24 Hitachi Omron Terminal Solutions Corp Residue detection method and transaction device
CN101231696A (en) * 2008-01-30 2008-07-30 安防科技(中国)有限公司 Method and system for detection of hangover
CN101289156A (en) * 2008-05-30 2008-10-22 浙江工业大学 Intelligent container sling based on omniberaing vision sensor


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Li Zhihua et al. Design of an intelligent image sensor system based on a real-time visual analysis algorithm. Chinese Journal of Sensors and Actuators (传感技术学报), 2008, Vol. 21, No. 7, pp. 1178-1183. *
Tang Yiping et al. Wide-dynamic-range omnidirectional vision sensor. Security Technology (安防科技), 2008, pp. 3-6. *
Tang Yiping et al. Intelligent omnidirectional vision sensor and its applications. Chinese Journal of Sensors and Actuators (传感技术学报), 2007, Vol. 20, No. 6, pp. 1316-1320. *

Also Published As

Publication number Publication date
CN101552910A (en) 2009-10-07

Similar Documents

Publication Publication Date Title
CN101552910B (en) Remnant detection device based on comprehensive computer vision
CN101533548B (en) Device for protecting property based on omnibearing computer visual sense
CN106980829B (en) Abnormal behaviour automatic testing method of fighting based on video analysis
Ellis Performance metrics and methods for tracking in surveillance
US8538082B2 (en) System and method for detecting and tracking an object of interest in spatio-temporal space
Porikli et al. Video surveillance: past, present, and now the future [DSP Forum]
US8532427B2 (en) System and method for image enhancement
CN101561270B (en) Embedded omnidirectional ball vision object detection and mobile monitoring system and method
CN102542492B (en) System and method for evaluating effect of visual advertisement
CN106408940A (en) Microwave and video data fusion-based traffic detection method and device
CN104378582A (en) Intelligent video analysis system and method based on PTZ video camera cruising
US20070058717A1 (en) Enhanced processing for scanning video
CN104166841A (en) Rapid detection identification method for specified pedestrian or vehicle in video monitoring network
CN103873825A (en) ATM (automatic teller machine) intelligent monitoring system and method
CN101366045A Object density estimation in video
CN103069796A (en) Method for counting objects and apparatus using a plurality of sensors
CN101969548A (en) Active video acquiring method and device based on binocular camera shooting
CN107240124A (en) Across camera lens multi-object tracking method and device based on space-time restriction
CN106778540B (en) Parking detection is accurately based on the parking event detecting method of background double layer
CN102254394A (en) Antitheft monitoring method for poles and towers in power transmission line based on video difference analysis
Ali et al. Autonomous road surveillance system: A proposed model for vehicle detection and traffic signal control
CN105592301A (en) Image capturing apparatus, method of controlling the same, monitoring camera system
CN110458089A (en) A kind of naval target interconnected system and method based on the observation of height rail optical satellite
CN106022458A (en) People fast counting method for school bus safety
CN109977796A (en) Trail current detection method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20110406

Termination date: 20180330