CN101533548B - Device for protecting property based on omnibearing computer visual sense - Google Patents


Info

Publication number
CN101533548B
CN101533548B · Application CN2009100975176A (also written CN200910097517A)
Authority
CN
China
Prior art keywords
property
scene
moving object
unit
camera lens
Prior art date
Legal status
Expired - Fee Related
Application number
CN2009100975176A
Other languages
Chinese (zh)
Other versions
CN101533548A (en)
Inventor
汤一平
庞成俊
Current Assignee
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT
Priority to CN2009100975176A
Publication of CN101533548A
Application granted
Publication of CN101533548B
Status: Expired - Fee Related


Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses a property protection device based on omnidirectional computer vision, comprising a vision sensor that monitors the protected property over a large panoramic scene and a microprocessor that performs the protection detection. The vision sensor consists of an omnidirectional vision sensor (ODVS) and a high-speed dome snapshot camera. The microprocessor comprises an ODVS image acquisition module; an in-scene object detection module that extracts moving objects, special background objects and stationary scene objects; and a protected-property detection module that detects property-loss events together with the moving object, time and place of each event, and that includes a virtual closed key-monitoring-region customization unit, a unit for detecting moving objects approaching the virtual closed key monitoring region, a unit for detecting the removal of a special background object, a unit that automatically generates a protected-property loss-detection ID number, and a suspicious-moving-object snapshot unit. The invention offers omnidirectional vision, good reliability, easy maintenance and a high degree of intelligence.

Description

Device for protecting property based on omnidirectional computer vision
Technical field
The invention relates to the application of optical technology and computer vision to property protection, and is specifically applicable to a device for detecting the theft of property in large open environments.
Background art
Exhibition centers, museums, trade fairs, shopping malls, storage sites and similar venues need a detection technique that protects valuable property: as soon as an important item is removed, the computer should immediately raise an alarm and capture an image of the suspected thief, providing effective clues for quickly solving the case.
Existing property-protection means have several main problems: 1) because the visual range of a single camera is limited, several cameras must be used to cover property over a large scene; 2) the detection means lack intelligence, so personnel still have to watch the monitor screens.
Summary of the invention
To overcome the limited visual range, poor reliability, troublesome maintenance and low degree of intelligence of existing property-protection systems, the invention provides a property protection device based on omnidirectional computer vision that has omnidirectional vision, good reliability, easy maintenance and a high degree of intelligence.
The technical solution adopted by the invention to solve this technical problem is as follows:
A property protection device based on omnidirectional computer vision comprises a vision sensor for monitoring the protected property within a large panoramic scene and a microprocessor that performs the protection detection from the signal of the vision sensor, the vision sensor being connected to the microprocessor. The vision sensor consists of an omnidirectional vision sensor (ODVS) and a high-speed dome snapshot camera. The omnidirectional vision sensor comprises a support, a transparent housing, a first catadioptric mirror, a secondary catadioptric mirror, an imaging-unit lens and a wide-angle lens. The transparent housing, the first catadioptric mirror and the imaging-unit lens are mounted on the support; the first catadioptric mirror is located at the top of the transparent housing; the bottom center of the transparent housing is open; the imaging-unit lens is located above the first catadioptric mirror; the secondary catadioptric mirror is installed in the central opening; a small hole is opened in the middle of the secondary catadioptric mirror; the wide-angle lens is installed in this small hole, facing the imaging-unit lens; and the central axes of the first catadioptric mirror, the secondary catadioptric mirror and the wide-angle lens lie on the same axis. Let f1 be the focal length of the imaging-unit lens, f2 the focal length of the wide-angle lens, S1 the distance between the imaging-unit lens and its focal point, S2 the distance from the imaging-unit lens to the first imaging point, S3 the distance from the wide-angle lens to the first imaging point, and S4 the distance from the wide-angle lens to the real object point. The lens imaging formula then gives the following relations:
1/f1 = 1/S1 + 1/S2    (10)
1/f2 = 1/S3 + 1/S4    (11)
d = S2 + S3    (12)
Taking the distance d between the imaging-unit lens and the wide-angle lens as a constraint, the focal length f2 of the wide-angle lens is designed so that equation (12) is satisfied.
Considering the imaging-unit lens and the wide-angle lens together as a compound lens, its focal length f is given by:
1/f = (f1 + f2 − d) / (f1 · f2)    (13)
In addition, denoting the diameter of the compound lens by D, its magnification is given by:
n = D / f    (14)
The compound lens satisfies:
n = D / f = 2·θ1max    (15)
In equation (15), θ1max is the maximum angle between the secondary reflected ray V3 and the catadioptric main axis Z.
The microprocessor comprises:
an ODVS image acquisition module, used to obtain the panoramic video image of the monitored scene, comprising a system initialization unit and an image acquisition unit;
an in-scene object detection module, used to extract moving objects, temporarily stationary left-behind objects and stationary scene objects from the monitored scene, comprising:
a long-period background modeling unit, which models the background pixels with a small update rate using a mixture-of-Gaussians model;
a short-period background modeling unit, which models the background pixels with a large update rate using a mixture-of-Gaussians model;
a shadow suppression unit, used to distinguish moving objects from moving shadows. The detection method is based on the HSV color space, in which the H component represents hue, the S component saturation and the V component brightness. Decision formula (20) is used to judge whether a pixel belongs to a shadow:
[Formula (20) is rendered only as an image in the original document.]
In the formula, I_V(x, y), I_S(x, y), I_H(x, y) and B_V(x, y), B_S(x, y), B_H(x, y) denote the H, S and V components of the new input pixel value I(x, y) and of the background pixel value at (x, y), respectively. The parameters satisfy 0 < α < β < 1; α accounts for the strength of the shadow: the stronger the shadow cast on the background, the smaller α should be; β is used to improve the robustness of the system. The parameter τ_S is less than zero, and τ_H is chosen mainly by tuning according to the scene.
an object discrimination unit, used to extract the moving objects, special background objects and stationary scene objects in the monitored scene. The intermediate results B_L and B_S obtained in the long-period and short-period background modeling units are used in difference operations to obtain the long-period foreground F_L and the short-period foreground F_S; then, according to the relation between the F_L and F_S values, each pixel of the current frame is classified into one of four types, with the decisions shown in Table 1:
F_L(x, y)   F_S(x, y)   Predicted type
1           1           Type 1: moving object
1           0           Type 2: temporarily stationary object
0           1           Type 3: background object removed, or random noise
0           0           Type 4: static scene object
Table 1
A plausibility function L based on statistical information is constructed to judge whether a pixel (x, y) belongs to a special background object; it is defined as:
[The definition of L(x, y) is rendered only as an image in the original document.]
In the formula, max_e and k are both positive numbers. max_e is a threshold: when L(x, y) > max_e, the pixel is judged to belong to a special background object. The value of max_e is chosen according to the required detection sensitivity and accuracy of the system, and k represents the decay rate of the plausibility function.
Combining the decisions of Table 1 with the plausibility function L: Type 1 pixels belong to moving objects, pixels satisfying L(x, y) > max_e belong to special background objects, and Type 4 pixels belong to static scene objects.
a protected-property detection module, used to detect property-loss events occurring in the scene together with the moving object, time and place related to each event, comprising:
a virtual closed key-monitoring-region customization unit, used to define a virtual closed region around the protected property;
a unit for detecting moving objects approaching the virtual closed key monitoring region, used to detect moving objects such as people appearing near or inside the virtual closed key monitoring region. It first checks whether there is a moving object; if one is detected, it computes the distance between the moving object and the outer edge of the virtual closed key monitoring region, and if this distance is smaller than the distance threshold D set by the system, the moving object is judged to be approaching the virtual closed key monitoring region;
a special-background-object removal detection unit, used to detect the disappearance of a special background object from the virtual closed key monitoring region;
a protected-property loss-detection ID number automatic generation unit, used to automatically generate a protected-property loss-detection ID number and to create a folder, associated with the property-loss event, in which close-up images of suspicious moving objects are stored;
a suspicious-moving-object snapshot unit, used to subsequently confirm when the property-loss event occurred and who caused it. When a moving object is detected approaching the region where the protected property is located, and the disappearance of the protected property is then also detected, the position (S_x, S_y) of the moving object is obtained from the result of the object discrimination unit; then, according to the mapping table established in the system initialization unit, the digital ID corresponding to the position (S_x, S_y) is output to the control port of the high-speed dome snapshot camera, which is thereby instructed to rotate to that position and take a picture; the resulting close-up image of the suspicious moving object is stored in the folder named after the protected-property loss-detection ID number.
Further, the omnidirectional vision sensor satisfies the requirement that the panoramic video image of the top view of the whole monitored field be free of deformation; it is designed by the following method.
According to the imaging principle, the angle between the incident ray V1 and the catadioptric main axis Z is φ, and the angle between the first reflected ray V2 and the main axis Z is θ2; the tangent at the point P1(t1, F1) makes an angle σ with the t axis and its normal makes an angle ε with the Z axis. The angle between the secondary reflected ray V3 and the main axis Z is θ1; the tangent at the point P2(t2, F2) makes an angle σ1 with the t axis and its normal makes an angle ε1 with the Z axis. From these relations, equation (1) is obtained:
[Equation (1) is rendered only as an image in the original document.]
where
tan φ = t1 / (F1(t1) − s),  tan θ2 = (t1 − t2) / (F2 − F1),  tan θ1 = t2 / F2
In these formulas, F1 is the curve of the first catadioptric mirror and F2 is the curve of the secondary catadioptric mirror.
Using the triangle relations and simplifying, equations (2) and (3) are obtained:
F1'² − 2·α·F1' − 1 = 0    (2)
F2'² − 2·β·F2' − 1 = 0    (3)
where
α = [(F1 − s)(F2 − F1) − t1(t1 − t2)] / [t1(F2 − F1) − (t1 − t2)(F1 − s)]
β = [t2(t1 − t2) + F2(F2 − F1)] / [t2(F2 − F1) − F2(t1 − t2)]
Solving equations (2) and (3) gives equations (4) and (5):
F1' = α ± √(α² + 1)    (4)
F2' = β ± √(β² + 1)    (5)
where F1' is the derivative of the curve F1 and F2' is the derivative of the curve F2.
A linear relation is required between points on the imaging plane and points on a horizontal plane. For an arbitrary point P on a horizontal plane L that is perpendicular to the Z axis and at distance C from the viewpoint S, there is a corresponding pixel p on the imaging plane. Expressing the coordinates on the horizontal plane in polar coordinates, an arbitrary point P(r, z) on the plane L is given by:
r = C · tan φ,  z = s + C    (6)
To design an ODVS with average resolution on the horizontal plane, i.e. an ODVS without deformation in the horizontal direction, the coordinate r of the point P in the direction perpendicular to the Z axis and the distance t2/F2(t2) of the pixel p from the Z axis must have a linear relation, i.e. the following equation must hold:
r = a · f · t2 / F2(t2) + b    (7)
From the imaging principle, the angle of incidence satisfies equation (8):
tan φ = t1 / (F1 − s)    (8)
Substituting equations (6) and (8) into equation (7) and rearranging gives the condition for no deformation in the horizontal direction, expressed by equation (9):
t2 = (F2(t2) / (a · f)) · (t1 / (F1(t1) − s) − b)    (9)
A mirror-curve design that satisfies equation (9) meets the requirement of average resolution in the horizontal direction.
Further again, in the unit for detecting moving objects approaching the virtual closed key monitoring region, the distance decision is made as follows: a circle is drawn centered at the center point of the moving object with the distance threshold D as its radius; if this circle intersects the virtual closed key monitoring region, the moving object is judged to be approaching the region.
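As an illustration of this distance decision, the following is a minimal sketch, assuming the virtual closed key monitoring region is stored as a polygon in panoramic image coordinates and the moving object is represented by its center point; the function and variable names are illustrative and not taken from the patent.

```python
import math

def point_to_segment_distance(px, py, ax, ay, bx, by):
    """Distance from point (px, py) to the segment from (ax, ay) to (bx, by)."""
    abx, aby = bx - ax, by - ay
    apx, apy = px - ax, py - ay
    ab2 = abx * abx + aby * aby
    t = 0.0 if ab2 == 0 else max(0.0, min(1.0, (apx * abx + apy * aby) / ab2))
    cx, cy = ax + t * abx, ay + t * aby
    return math.hypot(px - cx, py - cy)

def point_in_polygon(px, py, polygon):
    """Ray-casting test: is (px, py) inside the polygon (list of (x, y) vertices)?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > py) != (y2 > py):
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

def is_approaching(center, region_polygon, threshold_d):
    """The circle of radius D around the object center intersects the region
    exactly when the center is inside the polygon or within D of one of its edges."""
    px, py = center
    if point_in_polygon(px, py, region_polygon):
        return True
    n = len(region_polygon)
    return any(
        point_to_segment_distance(px, py, *region_polygon[i], *region_polygon[(i + 1) % n]) <= threshold_d
        for i in range(n)
    )
```

For example, `is_approaching((320, 180), [(300, 150), (400, 150), (400, 220), (300, 220)], 25)` returns True because the object center lies within 25 pixels of the region boundary.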
Further, in the protected-property loss-detection ID number automatic generation unit, at the moment when the disappearance of a special background object from the virtual closed key monitoring region is detected, a protected-property loss-detection ID number is generated automatically and, at the same time, a folder named after this ID number is created for storing the close-up images of the suspicious moving objects and the video of the property-loss event.
In the system initialization unit, a mapping is established between coordinates in the omnidirectional vision sensor image and the rotation-angle information of the high-speed dome snapshot camera: the panoramic video image is divided into small regions by a preset-point customization method, and a mapping is then established between each small region and a snapshot preset point of the dome camera. When the close-up image of some small region is to be captured, it suffices to send the corresponding digital ID to the control port of the dome camera, which then automatically turns to that region and takes the snapshot; the mapping table is read during system initialization.
The technical concept of the invention is as follows. The recently developed OmniDirectional Vision Sensor (ODVS) provides a new way of acquiring panoramic images of a scene in real time. The ODVS has a wide field of view (360 degrees) and compresses the information of a hemispherical field of view into a single image, so the information content of one image is large; the ODVS can be placed rather freely in the scene; it does not need to be aimed at a target while monitoring the environment; the algorithms for detecting and tracking moving objects within the monitored range are simpler; and real-time images of the scene can be obtained. Such an ODVS camera mainly consists of a CCD camera and a mirror facing the camera: the mirror reflects the image of a full horizontal circle onto the CCD, so that the environmental information of 360° in the horizontal direction is obtained in a single image. This omnidirectional camera has outstanding advantages and is a fast and reliable means of collecting visual information, especially under the requirement of real-time processing of the panorama.
A camera device that acquires real-time omnidirectional images is provided. A long-period background model and a short-period background model are built by a mixture-of-Gaussians modeling method with two different update rates; the acquired real-time panoramic image is then subtracted from the long-period and the short-period background model respectively; the results are judged with a plausibility function based on statistical information, yielding the moving objects, changed background objects and stationary background objects. Theft of the property is then detected by combining the appearance of moving objects and background changes in the video image with the time sequence of the events in the scene. Finally the high-speed dome camera captures a close-up image of the suspect, so that it can be determined when the property was taken and by whom.
The key problem of a property protection device based on omnidirectional computer vision is to detect the event in which a moving object appears around the protected property and the protected property then disappears. The solution of the invention is as follows. First, an omnidirectional vision sensor without blind angle is adopted; its 360° field of view effectively solves the problem that the spatial position and the time at which a property-disappearance event occurs in the scene are uncertain. Second, a mixture-of-Gaussians modeling method with two different update rates is proposed, which effectively detects the people and objects in the panoramic view that are related to a property-disappearance event. Third, to overcome the inability of the mixture-of-Gaussians model to distinguish moving objects from moving shadows, and to improve the robustness of the detection device, a shadow suppression unit is introduced. Finally, the correct judgment is made according to the course of a property-disappearance event, namely: a human object approaches the protected property object, the property object disappears, and the human object leaves the scene.
The beneficial effects of the invention are mainly: 1) omnidirectional vision, good reliability, easy maintenance and a high degree of intelligence; 2) applicability to property protection in open environments such as exhibition centers, museums, trade fairs, shopping malls and storage sites.
Description of drawings
Fig. 1 is a structural diagram of the omnidirectional vision sensor without blind angle;
Fig. 2 is a schematic view of the video image captured by the omnidirectional vision sensor;
Fig. 3 is an optical schematic diagram of the combination of the imaging-unit lens and the wide-angle lens;
Fig. 4 illustrates the ODVS designed according to the two-fold catadioptric principle and the horizontal-direction average resolution;
Fig. 5 is the imaging-plane projection diagram for the horizontal-direction average-resolution design;
Fig. 6 shows the catadioptric mirror curves obtained as the numerical solution of F1 and F2 with the 4th-order Runge-Kutta algorithm;
Fig. 7 is a schematic diagram of the property-protection detection method based on omnidirectional computer vision;
Fig. 8 is the hardware structure diagram of the property-protection detection device based on omnidirectional computer vision;
Fig. 9 is the flow chart for judging foreground objects and background objects;
Fig. 10 is the information-fusion diagram between the ODVS and the high-speed dome camera;
Fig. 11 is the software block diagram of the property-protection detection device based on omnidirectional computer vision.
Embodiment
The invention is further described below with reference to the accompanying drawings.
Embodiment 1
Referring to Fig. 1 to Fig. 11, the key problem of video-based property protection is to detect the event in which a moving object appears around the protected property 14 and the protected property then disappears. The embodiment works as follows. First, an omnidirectional vision sensor without blind angle and without deformation in the horizontal direction is adopted; its 360° field of view effectively solves the problem that the spatial position and the time at which a moving object appears in the scene 15 are uncertain. The region where the protected property is located is then marked out in the panoramic video image, i.e. a virtual closed key monitoring region is defined. Second, a mixture-of-Gaussians modeling method with two different update rates is proposed, which effectively detects the behaviour of moving objects entering the panoramic view. Third, to overcome the inability of the mixture-of-Gaussians model to distinguish moving objects from moving shadows, a shadow suppression unit is introduced. The device then judges whether a moving object appears near the virtual closed key monitoring region, and further judges whether the special background object inside that region has been removed; when these situations are found, the device raises an alarm automatically. Finally the high-speed dome camera captures a close-up image of the moving object that removed the special background object, providing clues for quickly solving the case. Fig. 7 shows one state in which a moving object approaches the virtual closed key monitoring region in the monitored scene, with the monitored scene 15, the property 14 and the moving object 13.
As shown in Fig. 8, the property protection device based on omnidirectional computer vision comprises an omnidirectional vision sensor, a high-speed dome snapshot camera and a computer. The omnidirectional vision sensor obtains the panoramic video image of the monitored scene; the high-speed dome camera captures close-up images of the moving objects that remove special background objects from the monitored scene; and the computer detects whether a property-loss event has occurred in the monitored scene.
As shown in Fig. 11, the computer comprises an ODVS image acquisition module, an in-scene object detection module and a protected-property detection module.
The ODVS image acquisition module obtains the panoramic video image of the monitored scene and comprises a system initialization unit and an image acquisition unit. The in-scene object detection module separates three different kinds of objects, namely moving objects, special background objects and stationary objects, from the panoramic video image sequence, and comprises a long-period background modeling unit, a short-period background modeling unit, a shadow suppression unit and an object discrimination unit. The protected-property detection module detects property-loss events and the moving object, time and place related to each event, and comprises a virtual closed key-monitoring-region customization unit, a unit for detecting moving objects approaching the virtual closed key monitoring region, a special-background-object removal detection unit, a protected-property loss-detection ID number automatic generation unit and a suspicious-moving-object snapshot unit.
The system initialization unit is mainly used to fuse the panoramic video information obtained by the ODVS with the local image information of the high-speed dome camera. The fusion of these two kinds of information is realized by establishing a mapping between coordinates in the ODVS image and the rotation-angle information of the dome camera: the panoramic video image is divided into small regions by a preset-point customization method (the division is shown in Fig. 10), and a mapping is then established between each small region and a snapshot preset point of the dome camera. When the close-up image of some small region is to be captured, it suffices to send the corresponding digital ID to the control port of the dome camera, which then automatically turns to that region and takes the snapshot. These functions are realized through a mapping table between the ODVS and the dome camera; the mapping table is read during system initialization, and the user is also allowed to rebuild a new mapping table. During system initialization the data of the virtual closed key monitoring regions customized in the virtual closed key-monitoring-region customization unit also need to be read in.
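The following is a minimal sketch of how such a mapping table between panorama cells and dome-camera preset IDs could be built and queried; the cell size, the row-major ID numbering and the function names are assumptions for illustration, not the patent's actual preset scheme.

```python
def build_preset_map(image_width, image_height, cell_size=32):
    """Divide the panoramic image into cell_size x cell_size regions and assign
    each region a digital preset ID, numbered row by row starting from 1."""
    cols = (image_width + cell_size - 1) // cell_size
    rows = (image_height + cell_size - 1) // cell_size
    return {(r, c): r * cols + c + 1 for r in range(rows) for c in range(cols)}

def preset_id_for_position(preset_map, cell_size, sx, sy):
    """Look up the digital ID for an object position (sx, sy) in panorama coordinates."""
    return preset_map[(int(sy) // cell_size, int(sx) // cell_size)]
```

In use, the table would be built (or reloaded from disk) during system initialization, and `preset_id_for_position(preset_map, 32, sx, sy)` would give the ID to send to the dome camera's control port so that it turns to the corresponding region.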
The image acquisition unit is mainly used to read the panoramic video image from the ODVS through a video capture card or another video interface; the acquired panoramic video image is passed to the in-scene object detection module for processing.
The long-period background modeling unit models the background pixels with a small update rate using a mixture-of-Gaussians model, i.e. the mixed-Gaussian background modeling algorithm; with this algorithm, both moving objects and temporarily stationary objects are detected as foreground objects.
The short-period background modeling unit models the background pixels with a large update rate using a mixture-of-Gaussians model, i.e. the mixed-Gaussian background modeling algorithm; with this algorithm, both temporarily stationary objects and static scene objects are detected as background objects.
The main idea of the mixed-Gaussian background modeling algorithm is to represent each pixel by a mixture of several Gaussian models. Suppose K Gaussian distributions are used to describe the color distribution of each point; the time-varying sequence {Y_1, Y_2, ..., Y_n} of each pixel is then modeled by the mixture of Gaussians. The probability of the currently observed pixel value is defined by equation (16):
P(Y_t) = Σ_{n=1}^{K} w_{n,t} · η(Y_t, μ_{n,t}, σ²_{n,t})    (16)
where t is the time point; K is the number of Gaussian models (generally 3 to 5); w_{n,t} is the weight of the n-th Gaussian model; and μ_{n,t} and σ²_{n,t} are the mean and variance of the n-th Gaussian model at time t. The probability density function is the Gaussian density of equation (17):
η(Y_t, μ_{n,t}, σ²_{n,t}) = (1 / (√(2π) · σ_{n,t})) · exp(−(Y_t − μ_{n,t})² / (2·σ²_{n,t}))    (17)
Each Gaussian distribution has its own weight and priority, and the distributions are always kept sorted in order of priority from high to low. A suitable background weight threshold is chosen: only the first few distributions within this threshold are considered background distributions, while the others are foreground distributions. When detecting foreground points, the pixel Y_t is matched against the Gaussian distributions one by one in priority order; if no Gaussian distribution representing the background matches Y_t, the point is judged to be a foreground point, otherwise it is a background point.
The small update rate and the large update rate refer to the update coefficient α of equation (18): a small update rate means a small value of α and a large update rate a large value of α, with 0 < α < 1:
μ = (1 − α) · μ + α · Y,  σ² = (1 − α) · σ² + α · (Y − μ)²    (18)
where μ and σ² are the mean and variance of the Gaussian model, α is the update coefficient, and Y is the pixel value.
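To make the dual-rate modeling concrete, the following is a minimal per-pixel sketch of a mixture-of-Gaussians background model with the update rule of equation (18), for a single grayscale pixel. It is not the patent's implementation; the number of Gaussians, the matching rule (2.5 standard deviations), the initial variances and the background weight threshold are illustrative assumptions.

```python
import numpy as np

class PixelMoG:
    """Per-pixel mixture of K Gaussians for a grayscale pixel, updated with the
    coefficient alpha of equation (18). Parameter values are illustrative."""

    def __init__(self, k=3, alpha=0.01, match_sigmas=2.5, bg_weight_threshold=0.7):
        self.alpha = alpha
        self.match_sigmas = match_sigmas
        self.bg_threshold = bg_weight_threshold
        self.weights = np.full(k, 1.0 / k)
        self.means = np.linspace(0.0, 255.0, k)   # spread initial means over the gray range
        self.vars = np.full(k, 15.0 ** 2)

    def _priority_order(self):
        # higher weight and smaller variance -> higher priority
        return np.argsort(-self.weights / np.sqrt(self.vars))

    def update(self, y):
        """Update the model with pixel value y; return True if y is foreground."""
        order = self._priority_order()
        matched = None
        for n in order:
            if abs(y - self.means[n]) <= self.match_sigmas * np.sqrt(self.vars[n]):
                matched = n
                break
        if matched is None:
            worst = order[-1]                      # replace the lowest-priority Gaussian
            self.means[worst], self.vars[worst] = float(y), 30.0 ** 2
            self.weights[worst] = 0.05
        else:
            a = self.alpha
            self.weights *= (1.0 - a)
            self.weights[matched] += a
            # equation (18): mu <- (1-a)*mu + a*Y,  sigma^2 <- (1-a)*sigma^2 + a*(Y-mu)^2
            self.means[matched] = (1.0 - a) * self.means[matched] + a * y
            self.vars[matched] = (1.0 - a) * self.vars[matched] + a * (y - self.means[matched]) ** 2
        self.weights /= self.weights.sum()
        # Gaussians whose cumulative weight (in priority order) reaches the threshold
        # are treated as background distributions.
        background, cumulative = set(), 0.0
        for n in self._priority_order():
            background.add(n)
            cumulative += self.weights[n]
            if cumulative >= self.bg_threshold:
                break
        return matched not in background
```

Running two such models per pixel, one with a small α (long-period background) and one with a large α (short-period background), yields the intermediate results B_L and B_S used later by the object discrimination unit.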
The shadow suppression unit is used to distinguish moving objects from moving shadows effectively; it is introduced here to improve the detection accuracy of the device. The shadow detection algorithm is combined with the mixture-of-Gaussians model: for a newly arriving pixel value I_t, the mixed-Gaussian background distribution model first judges whether the pixel is background or moving foreground; if it is foreground, the shadow decision formula is further applied to judge whether it belongs to a moving shadow. The detailed flow chart is shown in Fig. 9.
The shadow detection algorithm uses a detection method based on the HSV color space, which is closer to the way human vision perceives color than the RGB color space and therefore detects shadows more accurately. In the HSV color space, the H component represents hue, the S component saturation and the V component brightness. Because the brightness and the saturation of a shadowed area are darker than those of the background while the hue is essentially unchanged, whether a pixel belongs to a shadow can be judged by decision formula (20).
[Formula (20) is rendered only as an image in the original document.]
In the formula, I_V(x, y), I_S(x, y), I_H(x, y) and B_V(x, y), B_S(x, y), B_H(x, y) denote the H, S and V components of the new input pixel value I(x, y) and of the background pixel value at (x, y), respectively. The parameters satisfy 0 < α < β < 1; α accounts for the strength of the shadow: the stronger the shadow cast on the background, the smaller α should be; β is used to improve the robustness of the system. The parameter τ_S is less than zero, and τ_H is chosen mainly by tuning according to the scene.
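Since formula (20) itself appears only as an image, the sketch below implements the commonly used HSV shadow condition that the surrounding text describes (value ratio within [α, β], saturation difference below τ_S, hue difference below τ_H); the exact form of the patent's formula and the threshold values used here are assumptions.

```python
import colorsys

def is_shadow(pixel_rgb, background_rgb, alpha=0.4, beta=0.9, tau_s=-0.1, tau_h=0.1):
    """HSV shadow test in the spirit of formula (20): a foreground pixel is treated as a
    cast shadow if it is darker than the background (value ratio in [alpha, beta]) while
    its saturation drops and its hue stays roughly the same. Thresholds are illustrative."""
    ih, is_, iv = colorsys.rgb_to_hsv(*(c / 255.0 for c in pixel_rgb))
    bh, bs, bv = colorsys.rgb_to_hsv(*(c / 255.0 for c in background_rgb))
    if bv == 0:
        return False
    ratio_ok = alpha <= iv / bv <= beta
    sat_ok = (is_ - bs) <= tau_s                       # tau_s < 0: a shadow lowers saturation
    hue_diff = min(abs(ih - bh), 1.0 - abs(ih - bh))   # hue is circular in [0, 1)
    hue_ok = hue_diff <= tau_h
    return ratio_ok and sat_ok and hue_ok
```

In the overall pipeline this test would only be applied to pixels that the mixture-of-Gaussians model has already labeled as foreground, as the flow chart of Fig. 9 indicates.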
The object discrimination unit is used to extract the moving objects, special background objects and stationary scene objects in the monitored scene. It first reads the panoramic video image, then performs difference operations with the intermediate results B_L and B_S obtained in the long-period and short-period background modeling units, yielding the long-period foreground F_L (Foreground Long) and the short-period foreground F_S (Foreground Short); then, according to the relation between the F_L and F_S values, each pixel of the current frame is classified into one of four types, with the decisions shown in Table 1.
Table 1. Type of a pixel judged from the values of F_L and F_S
F_L(x, y)   F_S(x, y)   Predicted type
1           1           Type 1: moving object
1           0           Type 2: temporarily stationary object
0           1           Type 3: background object removed, or random noise
0           0           Type 4: static scene object
Type 1: if the pixel belongs to a moving object, it is foreground under either background-model update rate.
Type 2: if the pixel has changed from a moving state to a stationary state, it is regarded as background when the update rate is large; conversely, under the background image with the small update rate it still belongs to the foreground.
Type 3: if the pixel is disturbed by external noise or the background has changed, the background image with the small update rate cannot be updated in time and still judges the pixel as background, while the background image with the large update rate judges it as foreground; this is the type in which a background object has been removed.
Type 4: if the pixel is part of the static scene, it is regarded as background under both update rates.
From Table 1 it can be seen that only a pixel of Type 3, i.e. one satisfying F_L(x, y) = 0 ∧ F_S(x, y) = 1, may belong to a special background object. To enhance the robustness of the detection, the invention constructs a plausibility function L based on statistical information to judge whether the pixel (x, y) belongs to a special background object; it is defined as:
[The definition of L(x, y) is rendered only as an image in the original document.]
In the formula, max_e and k are both positive numbers. This function not only suppresses noise in the detection process but also controls the decision time after which a pixel is judged to belong to a special background object; for each pixel, the plausibility function collects its type information in real time and performs the corresponding statistics. max_e is a threshold: when L(x, y) > max_e, the pixel is judged to belong to a special background object. The value of max_e is chosen according to the required detection sensitivity and accuracy of the system; if the system must avoid noise as far as possible, a larger max_e should be used to guarantee the detection accuracy, but a larger max_e inevitably increases the decision time. k represents the decay rate of the plausibility function.
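The sketch below shows how the Table 1 classification and a plausibility accumulator might be combined; because the patent's exact formula for L(x, y) is shown only as an image, the accumulator used here (grow while the Type 3 condition holds, decay at rate k otherwise) is an assumed stand-in, and the values of max_e and k are illustrative.

```python
import numpy as np

def classify_pixels(f_long, f_short):
    """Table 1: combine long-period and short-period foreground masks (0/1 arrays)
    into an array of type codes 1..4."""
    types = np.empty(f_long.shape, dtype=np.uint8)
    types[(f_long == 1) & (f_short == 1)] = 1   # moving object
    types[(f_long == 1) & (f_short == 0)] = 2   # temporarily stationary object
    types[(f_long == 0) & (f_short == 1)] = 3   # background object removed / noise
    types[(f_long == 0) & (f_short == 0)] = 4   # static scene
    return types

def update_plausibility(l_map, types, k=0.2, max_e=10.0):
    """Assumed stand-in for the plausibility function L(x, y): accumulate evidence
    while a pixel stays in Type 3 and decay it at rate k otherwise; pixels whose
    accumulated value exceeds max_e are flagged as special background object."""
    l_map = np.where(types == 3, l_map + 1.0, np.maximum(l_map - k, 0.0))
    special_background = l_map > max_e
    return l_map, special_background
```

Called once per frame with the two foreground masks, this keeps a per-pixel evidence map whose threshold max_e trades detection delay against noise, which is the trade-off described in the text above.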
Through the above computation, the moving objects, special background objects and stationary objects are separated from the panoramic video image sequence. To reduce false judgments of property-loss events and to improve the accuracy of the protected-property alarms, the region where the protected property is located is first marked out in the panoramic video image, i.e. a virtual closed key monitoring region is defined; the device then judges whether a moving object appears near this virtual closed key monitoring region, and further judges whether the special background object inside the region has been removed; when these situations are found, the device raises an alarm automatically; finally the high-speed dome camera captures a close-up image of the moving object that removed the special background object, providing clues for quickly solving the case. In the invention, the protected-property detection module performs this detection.
The virtual closed key-monitoring-region customization unit is used to define the virtual closed region around the protected property; according to the actual distribution of the protected property, several virtual closed key monitoring regions can be customized. In the invention, the user draws with the mouse, according to the spatial location of the protected property in the panoramic image, the virtual closed key monitoring regions containing the protected property, as shown by the dashed parts in Fig. 7.
The unit for detecting moving objects approaching the virtual closed key monitoring region is used to detect moving objects such as people appearing near or inside the virtual closed key monitoring region.
The special-background-object removal detection unit is used to detect the disappearance of a special background object from the virtual closed key monitoring region.
The protected-property loss-detection ID number automatic generation unit is used to automatically generate a protected-property loss-detection ID number and to create a folder, associated with the property-loss event, in which close-up images of suspicious moving objects are stored; the protected-property loss-detection ID number contains the time information of the property-loss event.
The suspicious-moving-object snapshot unit is used to subsequently confirm when the property-loss event occurred and who caused it, providing clues for quickly solving the case. When the device detects a moving object approaching the region where the protected property is located, and the disappearance of the protected property is then also detected, the position (S_x, S_y) of the moving object is obtained from the result of the object discrimination unit; then, according to the mapping table established in the system initialization unit, the digital ID corresponding to the position (S_x, S_y) is output to the control port of the high-speed dome camera, which is instructed to rotate to that position and take a picture; the resulting close-up image of the suspicious moving object is stored in the folder named after the protected-property loss-detection ID number.
The omnidirectional vision sensor is used to obtain a complete panoramic video image from which it can be effectively established when, where and by whom the protected property was taken; this requires the design of an omnidirectional vision sensor without blind angle and without deformation in the horizontal direction.
The absence of deformation in the horizontal direction requires a horizontal-direction average-resolution design, so that the panoramic video image of the top view of the whole monitored field is free of deformation. This comes down to the design of the catadioptric mirror curves of the ODVS. As shown in Fig. 4, the incident ray V1 of a light source point P in space is reflected at the point (t1, F1) of the primary reflecting mirror; the reflected ray V2 is reflected again at the point (t2, F2) of the secondary reflecting mirror; and the reflected ray V3 enters the lens of the camera at the angle θ1 and forms an image on the imaging unit (CCD or CMOS).
According to the imaging principle, the angle between the incident ray V1 and the catadioptric main axis Z is φ, and the angle between the first reflected ray V2 and the main axis Z is θ2; the tangent at the point P1(t1, F1) makes an angle σ with the t axis and its normal makes an angle ε with the Z axis. The angle between the secondary reflected ray V3 and the main axis Z is θ1; the tangent at the point P2(t2, F2) makes an angle σ1 with the t axis and its normal makes an angle ε1 with the Z axis. From these relations, equation (1) is obtained:
[Equation (1) is rendered only as an image in the original document.]
where
tan φ = t1 / (F1(t1) − s),  tan θ2 = (t1 − t2) / (F2 − F1),  tan θ1 = t2 / F2
In these formulas, F1 is the curve of the first catadioptric mirror and F2 is the curve of the secondary catadioptric mirror.
Using the triangle relations and simplifying, equations (2) and (3) are obtained:
F1'² − 2·α·F1' − 1 = 0    (2)
F2'² − 2·β·F2' − 1 = 0    (3)
where
α = [(F1 − s)(F2 − F1) − t1(t1 − t2)] / [t1(F2 − F1) − (t1 − t2)(F1 − s)]
β = [t2(t1 − t2) + F2(F2 − F1)] / [t2(F2 − F1) − F2(t1 − t2)]
Solving equations (2) and (3) gives equations (4) and (5):
F1' = α ± √(α² + 1)    (4)
F2' = β ± √(β² + 1)    (5)
where F1' is the derivative of the curve F1 and F2' is the derivative of the curve F2.
A linear relation is required between the points on the imaging plane and the points on a horizontal plane. For an arbitrary point P on a horizontal plane L that is perpendicular to the Z axis and at distance C from the viewpoint S, there is a corresponding pixel p on the imaging plane, as shown in Fig. 4. Expressing the coordinates on the horizontal plane in polar coordinates, an arbitrary point P(r, z) on the plane L can be written as:
r = C · tan φ,  z = s + C    (6)
To design an ODVS with average resolution on the horizontal plane, i.e. an ODVS without deformation in the horizontal direction, the coordinate r of the point P in the direction perpendicular to the Z axis and the distance t2/F2(t2) of the pixel p from the Z axis must have a linear relation. The following equation must therefore hold:
r = a · f · t2 / F2(t2) + b    (7)
From the imaging principle, the angle of incidence satisfies equation (8):
tan φ = t1 / (F1 − s)    (8)
Substituting equations (6) and (8) into equation (7) and rearranging gives the condition for no deformation in the horizontal direction, expressed by equation (9):
t2 = (F2(t2) / (a · f)) · (t1 / (F1(t1) − s) − b)    (9)
A mirror-curve design that satisfies equation (9) meets the requirement of average resolution in the horizontal direction.
Further, the numerical solutions of F1 and F2 are obtained by applying the 4th-order Runge-Kutta algorithm to equations (2), (3) and (9); the first and secondary catadioptric mirror curves calculated in this way achieve average resolution in the horizontal direction. Fig. 6 shows the catadioptric mirror curves obtained as the numerical solution of F1 and F2 with the 4th-order Runge-Kutta algorithm.
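As an illustration of the numerical step, the following is a generic 4th-order Runge-Kutta integrator of the kind that could be used to trace a mirror profile from equation (4) or (5); the coupling that computes α (or β) from the current ray geometry and the constraint (9) is design-specific and is left here as a user-supplied callable, so this is a sketch rather than the patent's actual solver.

```python
def rk4_profile(f_prime, t0, f0, t_end, steps=1000):
    """Generic 4th-order Runge-Kutta integration of dF/dt = f_prime(t, F).
    For the mirror design, f_prime would return alpha + sqrt(alpha**2 + 1)
    (equation (4)), with alpha computed from the current ray geometry."""
    h = (t_end - t0) / steps
    t, f = t0, f0
    profile = [(t, f)]
    for _ in range(steps):
        k1 = f_prime(t, f)
        k2 = f_prime(t + h / 2, f + h * k1 / 2)
        k3 = f_prime(t + h / 2, f + h * k2 / 2)
        k4 = f_prime(t + h, f + h * k3)
        f += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
        profile.append((t, f))
    return profile

# Illustrative call (alpha_of and the start point are hypothetical):
# profile_F1 = rk4_profile(lambda t, F: alpha_of(t, F) + (alpha_of(t, F) ** 2 + 1) ** 0.5,
#                          t0=10.0, f0=50.0, t_end=40.0)
```

The returned list of (t, F) pairs is the sampled mirror curve; plotting it against t reproduces the kind of profile shown in Fig. 6.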
Further, the omnidirectional vision sensor comprises a support, a transparent housing 2, a first catadioptric mirror 4, a secondary catadioptric mirror 5, an imaging-unit lens 3 and a wide-angle lens 6. The transparent housing, the first catadioptric mirror and the imaging-unit lens are mounted on the support; the first catadioptric mirror is located at the top of the transparent housing; the bottom center of the transparent housing is open; the imaging-unit lens is located above the first catadioptric mirror; the secondary catadioptric mirror is installed in the central opening; a small hole is opened in the middle of the secondary catadioptric mirror; the wide-angle lens is installed in this small hole, facing the imaging-unit lens 3; and the central axes of the first catadioptric mirror 4, the secondary catadioptric mirror 5 and the wide-angle lens 6 lie on the same axis. The transparent housing 2 is designed so that its inner wall does not produce interfering reflections, as shown in Fig. 1; the specific approach is to design the transparent housing as a bowl, i.e. as a half sphere, which avoids interfering reflections at the transparent housing 2. The structure of the ODVS is shown in Fig. 1.
The ODVS without blind angle is used to overcome the blind angle of the original ODVS caused by the occlusion of the secondary catadioptric mirror. In the present design the wide-angle lens is arranged on the secondary catadioptric mirror, and determining the design and position of the wide-angle lens is one task of the invention. Fig. 3 shows the positional relation between the imaging-unit lens and the wide-angle lens: the wide-angle lens is arranged in front of the first catadioptric mirror, on the secondary catadioptric mirror, and the central axes of the imaging-unit lens, the wide-angle lens, the first catadioptric mirror and the secondary catadioptric mirror lie on the same axis. The light passing through the circular hole in the first catadioptric mirror between the wide-angle lens and the imaging-unit lens forms an image called the first imaging point, and this imaging point is then imaged at the viewpoint by the imaging-unit lens. Let f1 be the focal length of the imaging-unit lens, f2 the focal length of the wide-angle lens, S1 the distance between the imaging-unit lens and its focal point, S2 the distance from the imaging-unit lens to the first imaging point, S3 the distance from the wide-angle lens to the first imaging point, and S4 the distance from the wide-angle lens to the real object point. The lens imaging formula then gives the following relations:
1/f1 = 1/S1 + 1/S2    (10)
1/f2 = 1/S3 + 1/S4    (11)
d = S2 + S3    (12)
If equation (12) holds, i.e. the wide-angle lens is placed at the distance d from the imaging-unit lens behind the first catadioptric mirror in Fig. 3, the wide-angle image shown in the middle of Fig. 2 is obtained. In the invention, however, the wide-angle lens is arranged on the secondary catadioptric mirror; the distance d between the imaging-unit lens and the wide-angle lens is therefore taken as a constraint, and equation (12) can only be satisfied by designing the focal length f2 of the wide-angle lens accordingly.
Further, considering the imaging-unit lens and the wide-angle lens of Fig. 3 together as a compound lens, its focal length f can be expressed as:
1/f = (f1 + f2 − d) / (f1 · f2)    (13)
In addition, denoting the diameter of the compound lens by D, its magnification can be expressed as:
n = D / f    (14)
In order to match the field of view of the compound lens to the blind-angle part of the ODVS, the design of the compound lens must satisfy:
n = D / f = 2·θ1max    (15)
where θ1max is the maximum angle between the secondary reflected ray V3 and the catadioptric main axis Z. Fig. 2 shows the image taken with an ODVS designed as above: viewed from a single ODVS, the blind-angle part of the original ODVS has been eliminated, and the combination of the imaging-unit lens with the wide-angle lens, together with the design of the first and second catadioptric mirrors, effectively covers the blind-angle part of the original ODVS.
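A small numerical check of equations (13) to (15) can be written as below; the units, the tolerance and the function names are assumptions for illustration, not values given in the patent.

```python
import math

def compound_lens_focal_length(f1, f2, d):
    """Equation (13): focal length of two thin lenses with focal lengths f1 and f2
    separated by distance d, f = f1*f2 / (f1 + f2 - d)."""
    return (f1 * f2) / (f1 + f2 - d)

def covers_dead_angle(f1, f2, d, diameter, theta1_max, tol=0.05):
    """Check equations (14)-(15) as written: the magnification n = D / f of the
    compound lens should equal 2 * theta1_max so that the wide-angle view matches
    the blind-angle part of the ODVS. Tolerance is an illustrative assumption."""
    f = compound_lens_focal_length(f1, f2, d)
    n = diameter / f
    return math.isclose(n, 2.0 * theta1_max, rel_tol=tol)
```

In a design loop, f2 would be adjusted under the constraint d = S2 + S3 of equation (12) until `covers_dead_angle` returns True for the chosen lens diameter and the θ1max obtained from the mirror-curve design.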
Further, the small hole in the first catadioptric mirror, the first catadioptric mirror, the camera, the transparent housing, the second catadioptric mirror and the wide-angle lens lie on the same central axis; the lens of the camera is placed at the viewpoint position behind the first catadioptric mirror, as shown in Fig. 1.
The transparent housing is mainly used to support the first catadioptric mirror, the second catadioptric mirror and the wide-angle lens, and to protect the first and second catadioptric mirrors from contamination by external dust, which would degrade the catadioptric imaging quality.
Embodiment 2
Sometimes, to save system cost, if the device only needs to detect whether a property-loss event has occurred in the monitored scene, the high-speed dome camera can be omitted from the hardware and the corresponding snapshot modules omitted from the software; everything else is the same as in Embodiment 1.
The effect produced by Embodiment 1 and Embodiment 2 above is that detection by omnidirectional computer vision makes the range of security monitoring much wider and provides a brand-new, low-cost, easy-to-maintain, more reliable, visual and intelligent means and device for property-protection monitoring, applicable to property protection in open environments such as exhibition centers, museums, trade fairs, shopping malls and storage sites.

Claims (4)

1. A property protection device based on omnidirectional computer vision, comprising a vision sensor for monitoring the protected property within a large panoramic scene, characterized in that the device further comprises a microprocessor that performs the protection detection from the signal of the vision sensor, the vision sensor being connected to the microprocessor; the vision sensor consists of an omnidirectional vision sensor and a high-speed dome snapshot camera; the omnidirectional vision sensor comprises a support, a transparent housing, a first catadioptric mirror, a secondary catadioptric mirror, an imaging-unit lens and a wide-angle lens; the transparent housing, the first catadioptric mirror and the imaging-unit lens are mounted on the support; the first catadioptric mirror is located at the top of the transparent housing; the bottom center of the transparent housing is open; the imaging-unit lens is located above the first catadioptric mirror; the secondary catadioptric mirror is installed in the central opening; a small hole is opened in the middle of the secondary catadioptric mirror; the wide-angle lens is installed in this small hole, facing the imaging-unit lens; and the central axes of the first catadioptric mirror, the secondary catadioptric mirror and the wide-angle lens lie on the same axis; letting f1 be the focal length of the imaging-unit lens, f2 the focal length of the wide-angle lens, S1 the distance between the imaging-unit lens and its focal point, S2 the distance from the imaging-unit lens to the first imaging point, S3 the distance from the wide-angle lens to the first imaging point, and S4 the distance from the wide-angle lens to the real object point, the lens imaging formula gives the following relations:
1/f1 = 1/S1 + 1/S2    (10)
1/f2 = 1/S3 + 1/S4    (11)
d = S2 + S3    (12)
taking the distance d between the imaging-unit lens and the wide-angle lens as a constraint, the focal length f2 of the wide-angle lens is designed so that equation (12) is satisfied;
considering the imaging-unit lens and the wide-angle lens together as a compound lens, its focal length f is given by:
1/f = (f1 + f2 − d) / (f1 · f2)    (13)
in addition, denoting the diameter of the compound lens by D, its magnification is given by:
n = D / f    (14)
the compound lens satisfies:
n = D / f = 2·θ1max    (15)
where, in equation (15), θ1max is the maximum angle between the secondary reflected ray V3 and the catadioptric main axis Z;
the microprocessor comprises:
an ODVS image acquisition module, used to obtain the panoramic video image of the monitored scene, comprising a system initialization unit and an image acquisition unit;
an in-scene object detection module, used to extract moving objects, temporarily stationary left-behind objects and stationary scene objects from the monitored scene, comprising:
a long-period background modeling unit, which models the background pixels with a small update rate using a mixture-of-Gaussians model;
a short-period background modeling unit, which models the background pixels with a large update rate using a mixture-of-Gaussians model;
a shadow suppression unit, used to distinguish moving objects from moving shadows, the detection method being based on the HSV color space, in which the H component represents hue, the S component saturation and the V component brightness, whether a pixel belongs to a shadow being judged by decision formula (20):
[Formula (20) is rendered only as an image in the original document.]
in which I_V(x, y), I_S(x, y), I_H(x, y) and B_V(x, y), B_S(x, y), B_H(x, y) denote the H, S and V components of the new input pixel value I(x, y) and of the background pixel value at (x, y), respectively; the parameters satisfy 0 < α < β < 1, α accounts for the strength of the shadow, so that the stronger the shadow cast on the background, the smaller α should be, and β is used to improve the robustness of the system; the parameter τ_S is less than zero, and τ_H is chosen mainly by tuning according to the scene;
an object discrimination unit, used to extract the moving objects, special background objects and stationary scene objects in the monitored scene, in which difference operations are performed with the long-period background B_L and the short-period background B_S obtained in the long-period and short-period background modeling units, yielding the long-period foreground F_L and the short-period foreground F_S; then, according to the relation between the F_L and F_S values, each pixel of the current frame is classified into one of four types, with the decisions shown in Table 1:
F_L(x, y)   F_S(x, y)   Predicted type
1           1           Type 1: moving object
1           0           Type 2: temporarily stationary object
0           1           Type 3: background object removed, or random noise
0           0           Type 4: static scene object
Table 1
a plausibility function L based on statistical information is constructed to judge whether the pixel (x, y) belongs to a special background object, defined as:
[The definition of L(x, y) is rendered only as an image in the original document.]
in which max_e and k are both positive numbers; max_e is a threshold, and when L(x, y) > max_e the pixel is judged to belong to a special background object; the value of max_e is chosen according to the required detection sensitivity and accuracy of the system; k represents the decay rate of the plausibility function;
combining the decisions of Table 1 with the plausibility function L, Type 1 pixels belong to moving objects, pixels satisfying the condition L(x, y) > max_e belong to special background objects, and Type 4 pixels belong to static scene objects;
a protected-property detection module, used to detect property-loss events occurring in the scene together with the moving object, time and place related to each event, comprising:
a virtual closed key-monitoring-region customization unit, used to define a virtual closed region around the protected property;
a unit for detecting moving objects approaching the virtual closed key monitoring region, used to detect moving objects appearing near or inside the virtual closed key monitoring region, which first checks whether there is a moving object and, if one is detected, computes the distance between the moving object and the outer edge of the virtual closed key monitoring region; if this distance is smaller than the distance threshold D set by the system, the moving object is judged to be approaching the virtual closed key monitoring region;
a special-background-object removal detection unit, used to detect the disappearance of a special background object from the virtual closed key monitoring region;
a protected-property loss-detection ID number automatic generation unit, used to automatically generate a protected-property loss-detection ID number and to create a folder, associated with the property-loss event, in which close-up images of suspicious moving objects are stored;
a suspicious-moving-object snapshot unit, used to subsequently confirm when the property-loss event occurred and who caused it; when a moving object is detected approaching the region where the protected property is located, and the disappearance of the protected property is then also detected, the position (S_x, S_y) of the moving object is obtained from the result of the object discrimination unit; then, according to the mapping table established in the system initialization unit, the digital ID corresponding to the moving-object position (S_x, S_y) is output to the control port of the high-speed dome snapshot camera, which is instructed to rotate to that position and take a picture; the resulting close-up image of the suspicious moving object is stored in the folder named after the protected-property loss-detection ID number.
2. The device for protecting property based on omnidirectional computer vision as claimed in claim 1, characterized in that: in the described moving-object-approaching detection unit for the virtual sealed key monitoring region, the distance test is performed by drawing a circle centered on the central point of the moving object with the distance threshold D as its radius; if this circle intersects the virtual sealed key monitoring region, the moving object is judged to be approaching the virtual sealed key monitoring region.
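Read geometrically, the circle test of claim 2 is equivalent to asking whether the moving object's central point lies within distance D of the region boundary. A small Python sketch under that reading follows; representing the virtual sealed region as a polygon of vertex coordinates is an assumption made for illustration, and a centroid already inside the region falls under the separate "appearing in the region" case of claim 1, which this sketch does not handle.

```python
import math

def point_segment_distance(px, py, ax, ay, bx, by):
    """Distance from point (px, py) to the segment with endpoints (ax, ay) and (bx, by)."""
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def is_approaching(centroid, region_polygon, d_threshold):
    """True if the circle of radius D around the moving object's central point touches
    the boundary of the virtual sealed key monitoring region (given as a vertex list)."""
    cx, cy = centroid
    n = len(region_polygon)
    return any(
        point_segment_distance(cx, cy, *region_polygon[i], *region_polygon[(i + 1) % n]) <= d_threshold
        for i in range(n)
    )

# Example: centroid 20 pixels above a rectangular region, threshold D = 25 -> approaching.
is_approaching((120, 80), [(100, 100), (200, 100), (200, 180), (100, 180)], d_threshold=25)
```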
3. The device for protecting property based on omnidirectional computer vision as claimed in claim 2, characterized in that: in the described automatic generation unit for the protected property loss detection ID number, at the moment the disappearance of the special background object from the virtual sealed key monitoring region is detected, a protected property loss detection ID number is generated automatically and, at the same time, a folder named after this ID number is created, in which the close-up images of the suspicious moving objects and the video of the property loss event are stored.
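As a toy illustration of the bookkeeping in claim 3, the sketch below generates an ID and creates its folder at the moment the disappearance is detected; the timestamp-based naming and the directory layout are assumptions, not taken from the patent.

```python
import os
from datetime import datetime

def new_loss_detection_id(base_dir="loss_events"):
    """Generate a loss-detection ID and create the folder named after it (illustrative scheme)."""
    detection_id = datetime.now().strftime("%Y%m%d%H%M%S%f")   # assumed timestamp-based ID
    folder = os.path.join(base_dir, detection_id)
    os.makedirs(folder, exist_ok=True)                          # will hold close-ups and event video
    return detection_id, folder
```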
4. The device for protecting property based on omnidirectional computer vision as claimed in claim 3, characterized in that: in the described system initialization unit, a mapping relation is established between coordinates in the omnidirectional vision sensor image and the rotation angle information of the high-speed fast-ball capture sensor; that is, the panoramic video image is divided into small zones one by one using the preset-point customization method, and a mapping relation is then established between each small zone and a capture point of the high-speed fast-ball sensor. When the close-up image of some zone is to be captured, the mapping table established during system initialization is read, and as long as the corresponding digital ID is sent to the control port of the high-speed fast-ball sensor, the sensor automatically turns to that zone and captures the image.
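The initialization step of claim 4 can be sketched as building a lookup table from panorama zones to the digital IDs of fast-ball preset points, which the capture unit then consults at run time. The data layout below (a dict keyed by grid zones, a 16x16 split) is an assumption for illustration only.

```python
def build_zone_id_map(presets):
    """Invert the preset-point customization: `presets` maps digital_id -> (row, col) of the
    panorama zone that preset point covers (layout is an assumption)."""
    return {zone: digital_id for digital_id, zone in presets.items()}

def digital_id_for(x, y, width, height, zone_to_id, grid=16):
    """Digital ID to write to the fast-ball control port for image point (x, y)."""
    return zone_to_id[(y * grid // height, x * grid // width)]

# Example: preset point 7 was taught to cover zone (row=3, col=12) of a 16x16 split.
table = build_zone_id_map({7: (3, 12)})
digital_id_for(x=1500, y=430, width=1920, height=1920, zone_to_id=table)   # -> 7
```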
CN2009100975176A 2009-04-03 2009-04-03 Device for protecting property based on omnibearing computer visual sense Expired - Fee Related CN101533548B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009100975176A CN101533548B (en) 2009-04-03 2009-04-03 Device for protecting property based on omnibearing computer visual sense

Publications (2)

Publication Number Publication Date
CN101533548A CN101533548A (en) 2009-09-16
CN101533548B true CN101533548B (en) 2011-09-14

Family

ID=41104121

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009100975176A Expired - Fee Related CN101533548B (en) 2009-04-03 2009-04-03 Device for protecting property based on omnibearing computer visual sense

Country Status (1)

Country Link
CN (1) CN101533548B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101873477B (en) * 2010-02-26 2012-09-05 杭州海康威视数字技术股份有限公司 Method and device for realizing monitoring by fast ball tracking system
CN102164270A (en) * 2011-01-24 2011-08-24 浙江工业大学 Intelligent video monitoring method and system capable of exploring abnormal events
EP2932708B1 (en) * 2013-05-28 2017-07-19 Hewlett-Packard Enterprise Development LP Mobile augmented reality for managing enclosed areas
CN103747217A (en) * 2014-01-26 2014-04-23 国家电网公司 Video monitoring method and device
CN104933816B (en) * 2014-03-17 2017-08-11 南充鑫源通讯技术有限公司 The distance of reaction method to set up and device of a kind of automatic sensing safety-protection system
CN104298988B (en) * 2014-08-21 2017-08-25 华南理工大学 A kind of property guard method matched based on video image local feature
CN104539896B (en) * 2014-12-25 2018-07-10 桂林远望智能通信科技有限公司 The intelligent monitor system and method for a kind of overall view monitoring and hot spot feature
CN110264522B (en) * 2019-06-24 2021-07-13 北京百度网讯科技有限公司 Article operator detection method, apparatus, device and storage medium
CN112312068B (en) * 2019-07-31 2022-04-15 杭州海康威视数字技术股份有限公司 Target detection method, device, equipment and storage medium

Similar Documents

Publication Title
CN101533548B (en) Device for protecting property based on omnibearing computer visual sense
CN101552910B (en) Remnant detection device based on comprehensive computer vision
US8532427B2 (en) System and method for image enhancement
CN104935893B (en) Monitor method and apparatus
CN101542232B (en) Normal information generating device and normal information generating method
US8170278B2 (en) System and method for detecting and tracking an object of interest in spatio-temporal space
CN103017730B (en) Single-camera ranging method and single-camera ranging system
US20180211398A1 (en) System for 3d image filtering
CN107635129B (en) Three-dimensional trinocular camera device and depth fusion method
CN102833478B (en) Fault-tolerant background model
CN107240124A (en) Across camera lens multi-object tracking method and device based on space-time restriction
CN101276499B (en) Intelligent monitoring apparatus of ATM equipment based on all-directional computer vision
CN104966062B (en) Video monitoring method and device
CN104902246A (en) Video monitoring method and device
CN101834986A (en) Imaging device, mobile body detecting method, mobile body detecting circuit and program
CN105787876A (en) Panorama video automatic stitching method based on SURF feature tracking matching
CN106600628A (en) Target object identification method and device based on infrared thermal imaging system
CN111382610B (en) Event detection method and device and electronic equipment
CN101777223B (en) Financial self-service terminal and control method of safety zone thereof
Gabaldon et al. A framework for enhanced localization of marine mammals using auto-detected video and wearable sensor data fusion
Xia et al. SCSS: An Intelligent Security System to Guard City Public Safe
CN106997685A (en) A kind of roadside parking space detection device based on microcomputerized visual
CN113468985A (en) Method for locking suspicious radiation source carrying personnel
CN111985331A (en) Detection method and device for preventing secret of business from being stolen
CN202472608U (en) Signal receiver of electronic whiteboard with wide angle image detection

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20110914

Termination date: 20150403

EXPY Termination of patent right or utility model