GB2439184A - Obstacle detection in a surveillance system - Google Patents


Info

Publication number
GB2439184A
GB2439184A (application GB0711019A)
Authority
GB
United Kingdom
Prior art keywords
surveillance
size
data records
status data
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB0711019A
Other versions
GB2439184B (en)
GB0711019D0 (en)
Inventor
Thomas Jaeger
Marcel Merkel
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Robert Bosch GmbH
Original Assignee
Robert Bosch GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Robert Bosch GmbH filed Critical Robert Bosch GmbH
Publication of GB0711019D0 publication Critical patent/GB0711019D0/en
Publication of GB2439184A publication Critical patent/GB2439184A/en
Application granted granted Critical
Publication of GB2439184B publication Critical patent/GB2439184B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06K9/00369
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

An image processing method is proposed for the recognition and processing of visual obstacles, such as desks or cabinets 7, 8, in a surveillance scene 6 in an image sequence, wherein a plurality of data records of a surveillance object are collected, which include in each case an object position x, y and a size h of the surveillance object measured at the object position in an image of the image sequence, wherein by comparing the measured size h of the surveillance object to a modelled perspective size of the surveillance object at the same object position y, one of the visual obstacles 7, 8 can be detected and/or verified. The data are used to plot a histogram in which a straight line or plane may be identified to show the normal perspective change, and subsidiary lines denote an object obscured by an obstacle. A scene model may be created from the data, and the method may be used for tracking objects.

Description

<p>Description Title</p>
<p>Image processing method, video surveillance system, and computer program</p>
<p>Prior art</p>
<p>The present invention relates to an image processing method for the recognition and processing of visual obstacles in a surveillance scene in an image sequence, in which a plurality of status data records of a surveillance object are collected, which include in each case an object position and a size of the surveillance object measured at the object position in an image of the image sequence, as well as a video surveillance system for carrying out the image processing method and a computer program with program code means for carrying out the image processing method.</p>
<p>Video surveillance systems are employed in a large number of applications, for example in the surveillance of open spaces, traffic intersections, but also in buildings, such as for example museums, schools, universities, prisons and factories. In this connection the video surveillance systems generally comprise a plurality of surveillance cameras, whose observation field is targeted on relevant regions, and a central evaluation unit, in which the recorded video material is collected. In this evaluation unit the video material is optionally stored or evaluated.</p>
<p>For the evaluation, surveillance staff are often employed in order to monitor the incoming image sequences online or in real time. However, it is known that the attentiveness and concentration of the surveillance staff can decrease after some time on account of tiredness.</p>
<p>For this reason image processing algorithms are also nowadays used in conventional video surveillance systems, in order to evaluate in an automated manner the large amounts of video material that are recorded by surveillance cameras. Often for this purpose moving objects are separated out from the basically static background and are monitored over time, and alarms are triggered if, for example, relevant movements are observed. The first step, also termed object segmentation, is carried out by evaluating image differences between an actual camera image and a so-called scene reference image or scene model, which models the static scene background.</p>
<p>A large number of individual problems arise in the object segmentation as well as in the so-called object tracking connected therewith, which are discussed in summary form for example in the article by K. Toyama, J. Krumm, B. Brumitt, B. Meyers: "Wallflower: Principles and Practice of Background Maintenance", ICCV 1999, Corfu, Greece. The aforementioned problems concern for example the displacement of background objects, changes in illumination, quasi-stationary backgrounds such as moving trees, camouflage, lack of training material, shadow effects, etc.</p>
<p>Disclosure of the invention</p>
<p>According to the invention an image processing method is proposed for the recognition and processing of visual obstacles, with the features of claim 1; a video surveillance system with the features of claim 11, as well as a computer program with the features of claim 13.</p>
<p>Preferred or advantageous modifications are disclosed in the sub-claims, in the following description and in the accompanying drawings.</p>
<p>The method according to the invention is designed for the recognition and processing of visual obstacles in an image sequence of a surveillance scene. In this connection a method of image processing is used that is preferably based on digital image processing.</p>
<p>Visual obstacles are understood to mean preferably those obstacles in a surveillance scene that can be associated in particular on account of their stationary or quasi-stationary behaviour with the static image background or with the scene model, and/or at least partially obscure a surveillance object in some object positions of the surveillance object in the surveillance scene.</p>
<p>The visual obstacles cause the size of the surveillance object measured in the image to change abruptly during a tracking procedure, and as a result the tracking is made considerably more difficult. This situation occurs for example if the surveillance scene is not an unobstructed area, but is a region containing semi-high obstacles, such as for example low walls, parked cars, tables, cupboards, cabinets, etc., since the surveillance objects moving in the surveillance zone are only partly visible behind these visual obstacles.</p>
<p>By recognising and processing such visual obstacles there is the possibility of obtaining a knowledge base that can predict at which sites surveillance objects are expected to be only partly visible. By using this additional knowledge the tracking procedure can be significantly improved and/or simplified.</p>
<p>To implement the method in practice a plurality of status data records of a surveillance object are collected. The status data records comprise in each case an object position and a size of the surveillance object, measured at the object position, in an image of the image sequence.</p>
<p>The object position is preferably measured in image coordinates, i.e. for example pixel coordinates, and the measured size is likewise measured in pixel size or units equivalent and/or proportional to this. The measured size of the surveillance object thus corresponds in particular to the height of the representation of the surveillance object in the image of the image sequence. In particular status data records of the surveillance object are used that were obtained on the basis of a tracking of this surveillance object.</p>
<p>Preferably, status data records are used of a surveillance object belonging to a class whose average or normal size is known, for example the class "persons". Alternatively or in addition, status data records of several different surveillance objects belonging to such a class, and thus sharing a common or normal size, are collected. For example only persons, in particular adults, are included as surveillance objects.</p>
<p>According to the invention a decision regarding a visual obstacle is made by comparing the measured size of the surveillance object with a modelled perspective size of the surveillance object at the same object position.</p>
<p>The invention is thus based on the consideration that the measured size of the surveillance object changes only in a systematic manner depending on the object position in the image of the image sequence. On the one hand a change occurs on account of perspective effects. Thus, a surveillance object whose object position is in the foreground of a surveillance scene appears larger in the image of the image sequence than the same surveillance object whose object position is in the background of the surveillance scene. This generally known connection is trivial and independent of any visual obstacles. A further reason for a change in the measured size of the surveillance object lies in the fact that the surveillance object is partly obscured by a visual obstacle, and is only partly visible.</p>
<p>If now in a first step a model of the perspective behaviour of the surveillance object is produced depending on its object position in the images of the image sequence of a surveillance scene, then in a second step, by comparing the measured size of the surveillance object with the modelled perspective size of the surveillance object in each case at the same object position, a decision can be made regarding a visual obstacle, and in particular a visual obstacle is recognised if the measured size is smaller than the modelled perspective size of the surveillance object.</p>
<p>In a suitable realisation of the method the X-Y coordinates of the foot of the object are used as object positions, and/or the height of the surveillance object in the image is used as the measured size. The height of the surveillance object preferably corresponds to the physical height of the surveillance object in the surveillance scene, thus for example to the body size of a person. A possible implementation is to approximate the surveillance object by a rectangle in the image of the image sequence, to use as the foot of the object the mid-point of the lower edge of the approximating rectangle, and to use as the measured size the height of the rectangle.</p>
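The rectangle approximation described above can be sketched as follows; the function name and the coordinate convention (y increasing downwards, as usual for image coordinates) are illustrative assumptions, not taken from the patent:

```python
def status_record(bbox):
    """Build a status data record from an axis-aligned bounding box.

    bbox = (left, top, right, bottom) in pixel coordinates, with y
    increasing downwards. Returns (foot_x, foot_y, measured_height).
    """
    left, top, right, bottom = bbox
    foot_x = (left + right) / 2.0   # mid-point of the lower edge
    foot_y = float(bottom)          # lower edge = foot of the object
    height = float(bottom - top)    # measured size h in pixels
    return (foot_x, foot_y, height)

# Example: a person detected as a 40 x 120 pixel box
print(status_record((100, 80, 140, 200)))  # prints (120.0, 200.0, 120.0)
```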
<p>In a preferred modification of the method the collected status data records and/or other status data records are used to generate the model for the perspective size of the surveillance object depending on the object position in the images of the image sequence. In other words, a model is produced in which a modelled perspective size is provided for each, or virtually each, object position in the images of the image sequence.</p>
<p>Alternatively, in this model only one imaging factor is modelled for each object position in the images of the image sequence. By using measured status data records it is possible to produce the model without a large calibration effort, in particular in the form of a self-calibration or internal calibration. As an alternative to this or some other form of a self-calibration of the image processing method, it is also conceivable to input manually the model for the perspective size or imaging factor.</p>
<p>In a preferred embodiment of the image processing method the production of the model is achieved by approximating a principal straight line and/or a principal plane to the measured size, as a function of the respective object positions.</p>
<p>A possible practical realisation is to plot the measured sizes as a histogram as a function of the object position and to approximate the principal straight line or the principal plane to the measured sizes. Conveniently, only status data records of unobscured surveillance objects are taken into account in the approximation. In particular, the available status data records are classified into status data records with object positions at which the surveillance object is unobscured, and status data records with object positions at which the surveillance object is partly obscured by a visual obstacle. The classification is preferably performed by identifying a principal straight line and/or a principal plane for status data records in which the surveillance objects are located at an object position in which they are visible in full size and thus unobscured.</p>
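One simple way to realise this principal-line approximation is to take, for each band of foot positions, the largest measured height (on the assumption that the tallest observation in a band comes from an unobscured object) and fit a straight line h = a*y + b through those maxima by least squares. The bin width and all names are illustrative assumptions, not from the patent:

```python
def fit_principal_line(records, bin_width=10):
    """Fit the principal line h = a*y + b from (y, h) status records.

    Per-bin maxima serve as candidate unobscured sizes (occluded
    objects only ever appear *smaller*), then an ordinary least-squares
    line is fitted through the bin centres.
    """
    bins = {}
    for y, h in records:
        key = int(y) // bin_width
        bins[key] = max(bins.get(key, 0.0), h)
    pts = [(k * bin_width + bin_width / 2.0, h) for k, h in bins.items()]
    n = len(pts)
    sy = sum(y for y, _ in pts)
    sh = sum(h for _, h in pts)
    syy = sum(y * y for y, _ in pts)
    syh = sum(y * h for y, h in pts)
    a = (n * syh - sy * sh) / (n * syy - sy * sy)
    b = (sh - a * sy) / n
    return a, b
```

A robust estimator (e.g. RANSAC) could replace the per-bin maximum if spuriously tall detections are expected.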
<p>In another modification of the method status data records which lie next to the approximated principal straight line and/or principal plane are interpreted as status data records with surveillance objects that are at least partly obscured by visual obstacles. In particular the status data records of partly obscured surveillance objects form subsidiary straight lines or subsidiary assemblages consisting of the status data records, in which the measured size of the surveillance objects is less than the size plotted on the principal straight line at the same object position.</p>
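A minimal sketch of this classification, assuming a principal line h = a*y + b has already been fitted; the relative tolerance is an illustrative choice, not specified in the patent:

```python
def classify_records(records, a, b, tol=0.15):
    """Split (y, h) status records into unobscured vs partly obscured.

    Records whose measured height falls more than `tol` (relative)
    below the principal-line prediction a*y + b are interpreted as
    partly obscured by a visual obstacle.
    """
    unobscured, obscured = [], []
    for y, h in records:
        h_model = a * y + b
        if h < (1.0 - tol) * h_model:
            obscured.append((y, h))
        else:
            unobscured.append((y, h))
    return unobscured, obscured
```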
<p>Advantageously it is envisaged that by using a plurality of status data records of partly obscured surveillance objects, conclusions can be drawn concerning a visual obstacle, in which preferably the totality of the existing status data records of a class of surveillance objects are investigated for the existence of an assemblage of status data records of partly obscured surveillance objects. In particular one or every identified subsidiary straight line corresponds to a visual obstacle in the surveillance scene.</p>
<p>In an advantageous modification the visual obstacles are used to generate a scene model of the surveillance scene.</p>
<p>For this purpose a conclusion is drawn for example from each identified subsidiary straight line and/or assemblage as to whether there is a visual obstacle in the corresponding position in the surveillance scene.</p>
<p>Apart from a generation of the model for the perspective size, the comparison between the measured size of the surveillance object and a modelled perspective size of the surveillance object can also be used to update or to verify the scene model. In this way it is ensured that if there is a change in position of a visual obstacle, the scene model is correspondingly adapted.</p>
<p>Preferably the discovered visual obstacles and/or the generated scene model are taken into account in a tracking operation. With a knowledge of the visual obstacles and the scene model, it is possible to predict a change in size of the surveillance object on account of the occlusion caused by a visual obstacle.</p>
<p>In an expedient configuration used to obtain information on visual obstacles or on the scene model, a size filtering procedure is used in the tracking, which compensates as regards size the measured size of the surveillance objects taking into account the visual obstacles and/or the scene model.</p>
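Such a size filter might look as follows; the threshold and the interface are assumptions made for illustration:

```python
def compensated_size(y, h_measured, a, b, tol=0.15):
    """Size filtering step for the tracker.

    If the measured height is well below the perspective model
    h = a*y + b at this foot position, assume a partial occlusion and
    report the modelled full size instead, so that downstream matching
    uses the object's actual (partly invisible) extent.
    """
    h_model = a * y + b
    if h_measured < (1.0 - tol) * h_model:
        return h_model  # compensate for the obscured portion
    return h_measured
```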
<p>To summarise, the advantages of the method according to the invention consist especially in the fact that after a suitable training time static or quasi-static obscurations in surveillance scenes can be recognised, and surveillance objects that are only partly visible behind the obscurations can be processed according to their actual, but not fully visible, size.</p>
<p>In particular the proposed invention automatically detects existing obstacles in the scene, whereby preferably a prior manual input of information can be dispensed with. By knowing at which points in the surveillance scene objects are only partly visible, the tracking operation can easily be improved. In a practical implementation a size filtering of the surveillance objects to be detected is suitably adapted so that partly visible objects are size compensated for their obscured portion.</p>
<p>In addition a video surveillance system for carrying out the image processing method described above is proposed by the features of claim 11.</p>
<p>The video surveillance system includes a detection device that is designed for the collection of a plurality of status data records, of one or more surveillance objects in images of an image sequence. Preferably the surveillance objects belong to a common class, whose average or normal size is known. In particular the surveillance objects are only persons.</p>
<p>In addition the video surveillance system includes an evaluation device that is designed to detect and/or verify visual obstacles in the surveillance zone, by comparing the measured size of the surveillance object in an image of the image sequence of a surveillance scene to a modelled perspective size of the surveillance object at the same position.</p>
<p>Preferably the video surveillance system is designed as a computer system with means for connection by an electrical circuit to one or more video cameras. In particular the video cameras serve for the static observation of a surveillance environment, in other words with a static observation region. With modifications of the method and/or of the device, the surveillance cameras can also be movably arranged so that a larger surveillance environment can be monitored.</p>
<p>The invention also relates to a computer program with program code means for executing all steps of the aforedescribed image processing method when the program is run on a computer and/or on the aforedescribed video surveillance system.</p>
<p>Brief description of the drawings</p>
<p>Further features, advantages and effects of the present invention follow from the following description of a preferred embodiment, in which: Fig. 1 is a functional block diagram of a first embodiment of the video surveillance system, in which the functional block diagram is shown in conjunction with the operational sequence of an embodiment of a method according to the invention; Fig. 2 shows exemplary measurement results based on the video surveillance system and method illustrated in Fig. 1.</p>
<p>Embodiment(s) of the invention</p>
<p>Figure 1 shows on the left-hand side a video surveillance system 1 as a first embodiment of the invention, which is connected up by a suitably configured circuit arrangement to one or more surveillance cameras 2. The circuit arrangement may be configured as a direct cabling, or alternatively a network connection is possible, in particular via the internet. The surveillance cameras 2 are designed as static surveillance cameras, in other words they do not change their respective observation region during the surveillance operation.</p>
<p>The video surveillance system 1 includes as components an object tracker 3, an evaluation device 4 as well as a model generator 5. To illustrate the mode of operation of the video surveillance system 1, a flow diagram of the process sequence is given on the right-hand side of Fig. 1, process steps of the process sequence being associated with the components 3, 4 and 5 of the video surveillance system 1 by means of curly brackets.</p>
<p>In a first step A, video sequences of a surveillance scene taken with the surveillance cameras 2 are transmitted to the object tracker 3. Alternatively or in addition the video sequences can also be obtained from a storage device, such as for example a video recorder or a database. A surveillance object (not shown) is detected in the object tracker 3 and is tracked through the individual images of one of the video sequences. As is also explained hereinafter, the detection and the tracking optionally take place using a scene model. In particular, in the initial calibration of the video surveillance system 1 the step A is carried out without knowledge of the scene model.</p>
<p>In a second step B status data records of the surveillance object are collected, in which each status data record includes an object position and an object size of the surveillance object in an image of the video sequence. The object position and the object size are given in pixel coordinates or in coordinates or units equivalent thereto.</p>
<p>In a next stage C the collected status data are plotted on a two-dimensional or three-dimensional histogram, the height of the histogram being given by the object size.</p>
<p>Such a histogram is shown for example in Fig. 2 and is described in more detail later.</p>
<p>In a following step D the histogram generated in this way is evaluated, in which in the case of two-dimensional histograms a principal straight line, or in the case of three-dimensional histograms a principal plane, is fitted to the collected status data records. In particular the principal straight line or the principal plane is identified in the histogram, the principal straight line or principal plane being based on a selection of status data records which relates to unobscured surveillance objects visible in their full size. Following this, subsidiary straight lines, subsidiary planes or subsidiary segments are identified, which consist of status data records in which the surveillance object has an object size that is smaller than the object size defined by the principal straight line or principal plane at the same object position.</p>
<p>In a further step E, in the model generator 5 conclusions are drawn as regards visual obstacles in the surveillance scene on the basis of the identified subsidiary straight lines, subsidiary planes or subsidiary segments, wherein by evaluating the pixel coordinates corresponding positions in the surveillance scene for visual obstacles are allocated to the status data records forming the subsidiary straight lines, subsidiary planes or subsidiary segments. As a result information is obtained on the position and/or the size of the visual obstacles in the surveillance scene behind which the surveillance object is only partly visible.</p>
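A hypothetical sketch of this allocation step: given one cluster of status data records forming a subsidiary line, the obstacle's image-plane extent and its average occluding height can be estimated as the gap between the modelled and the measured size. All names and the output format are assumptions for illustration:

```python
def obstacle_from_cluster(cluster, a, b):
    """Derive an obstacle description from one subsidiary segment.

    cluster: list of (x, y, h) status records assigned to the same
    subsidiary line. a, b: principal-line coefficients h = a*y + b.
    Returns the obstacle's x-range, the foot-position y-range it
    affects, and the mean hidden height (modelled minus measured size).
    """
    xs = [x for x, _, _ in cluster]
    ys = [y for _, y, _ in cluster]
    hidden = [(a * y + b) - h for _, y, h in cluster]  # obscured portion
    return {
        "x_range": (min(xs), max(xs)),
        "y_range": (min(ys), max(ys)),
        "occluded_height": sum(hidden) / len(hidden),
    }
```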
<p>The totality of the visual obstacles detected in this way are used in a step F to generate a scene model.</p>
<p>In the further course of the method the scene model is used in the surveillance operation in order to improve the tracking, for example by suitably matching a size filtering of surveillance objects to be detected so that partly visible objects are compensated in size by their obscured portion.</p>
<p>For the further illustration of the method according to the invention, exemplary measurement results are shown in Fig. 2. The left-hand side with the heading Scene reference shows the image of an image sequence of a surveillance scene, in which the surveillance scene is illustrated as an office under surveillance. In the illustration, in the office a first writing desk 7 is arranged at the lower right-hand corner and a second writing desk 8 is arranged roughly in the middle on the left-hand side. These two writing desks 7 and 8 form typical visual obstacles in the image sequence of a surveillance scene, since a surveillance object, for example a person moving from the free middle region of the office to behind the writing desk 7 or 8 is partly hidden by the said writing desk 7 or 8. When tracking this person this partial obscuration leads to a sudden change in size of the surveillance object.</p>
<p>In order to recognise or detect such visual obstacles, measured status data records of the movement trajectories of the surveillance object are plotted on a histogram, which is shown by way of example in the right-hand side of Fig. 2 as a two-dimensional histogram 9. The vertical axis denotes the Y-position of the surveillance object in the image of the image sequence and the horizontal axis denotes the measured size or height of the surveillance object.</p>
<p>The origin of the axes is located in the upper left-hand corner of the diagram. The status data records are plotted as individual measurement points in the histogram 9 so that initially only a measurement point "cloud" can be recognised.</p>
<p>A principal straight line 10 is identified in this measurement point cloud by means of suitable algorithms, which corresponds to the normal perspective change in the object size starting from the foreground of the surveillance scene and continuing to the background of the surveillance scene. For optical geometric reasons the perspective change in size is shown as a straight line.</p>
<p>The remaining measurement points of status data records relate to object positions with associated object sizes, in which the object sizes are made smaller than the object sizes shown by the principal straight line 10. These remaining measurement points are collected into segments, a subsidiary straight line 11 being shown by way of example in Fig. 2. This subsidiary straight line 11, which comprises object sizes smaller than the corresponding object sizes of the principal straight line 10, is the result of the surveillance object being obscured by the first writing desk 7.</p>
<p>By knowing the subsidiary straight line 11 and in addition by knowing the X coordinate of the associated status data records, it is possible to allocate to the surveillance scene 6 a visual obstacle which corresponds in height and position to the first writing desk 7. By using the same procedure a second visual obstacle can be modelled, corresponding to the second writing desk 8. These and further visual obstacles are plotted in a scene model, which -as discussed hereinbefore -can advantageously be used to improve the tracking operation, in particular by adaptation of a size filtering.</p>
<p>To conclude, it should also be pointed out that the evaluation of the image or video sequences was discussed only by way of example, and in particular it is possible to fit another geometrical shape or figure to the collected status data records instead of the principal straight lines and subsidiary straight lines, especially if this corresponds better to the optical and geometrical circumstances.</p>

Claims (2)

  1. <p>Patent Claims 1. Video surveillance method for the recognition and processing of visual obstacles (7, 8) in a surveillance scene (6) in an image sequence, in which a plurality of status data records of a surveillance object are collected, which in each case include an object position (x, y) and a size (h) of the surveillance object measured at the object position in an image of the image sequence, characterised in that by comparing the measured size (h) of the surveillance object to a modelled perspective size of the surveillance object at the same object position (x, y) conclusions can be drawn as regards one of the visual obstacles (7, 8).
  2. 2. Method according to claim 1, characterised in that the status data records are used to generate a model for the perspective size of the surveillance object as a function of the object position (x, y) in the images of the image sequence.</p>
    <p>3. Method according to claim 2, characterised in that the generation of the model includes the approximation of a principal straight line (10) and/or a principal plane to the measured sizes (h) as a function of the object position (x, y).</p>
    <p>4. Method according to claim 3, characterised in that in the approximation only status data records with unobscured surveillance objects are taken into consideration.</p>
    <p>5. Method according to one of the preceding claims 2 to 4, characterised in that status data records which lie next to the approximated principal straight line (10) and/or principal plane are interpreted as status data records of a surveillance object that is partly obscured by one of the visual obstacles.</p>
    <p>6. Method according to claim 5, characterised in that by using a plurality of status data records of partly obscured surveillance objects, conclusions are drawn as regards one of the visual obstacles (7, 8).</p>
    <p>7. Method according to one of the preceding claims, characterised in that the visual obstacles (7, 8) are used to generate a scene model.</p>
    <p>8. Method according to claim 7, characterised in that the scene model is updated or verified by evaluating status data records of further surveillance objects.</p>
    <p>9. Method according to one of the preceding claims, characterised in that the visual obstacles (7, 8) and/or the scene model are taken into account in a tracking operation.</p>
    <p>10. Method according to claim 9, characterised in that in the tracking operation a size filtering is employed which compensates in size the measured size of the surveillance objects taking into account the visual obstacles.</p>
    <p>11. Video surveillance system (1) for carrying out the image processing method according to one of the preceding claims, with a detection device (3) which is designed to collect a plurality of status data records of a surveillance object, which comprise in each case an object position (x, y) and a size of the surveillance object measured at the object position in an image of the image sequence of the surveillance scene (6), characterised by an evaluation device (4) designed for the detection and/or verification of visual obstacles in the surveillance scene (6) by comparing the measured size of the surveillance object with a modelled perspective size of the surveillance object at the same object position.</p>
    <p>12. Video surveillance system (1) according to claim 11, characterised in that the video surveillance system can be coupled and/or is coupled to one or more video cameras (2), the video cameras (2) being designed for the static observation of a surveillance environment.</p>
    <p>13. Computer program with program code means for carrying out all steps of the method according to one or all of claims 1 to 10 when the program is run on a computer and/or a system (1) according to one of claims 11 and 12.</p>
    <p>14. A video surveillance method substantially as hereinbefore described with reference to the accompanying drawings.</p>
    <p>15. A video surveillance system substantially as hereinbefore described with reference to the accompanying drawings.</p>
    <p>16. A computer program substantially as hereinbefore described with reference to the accompanying drawings.</p>
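The detection principle recited in claims 1 to 11 (flagging image positions where an object's measured size falls well short of its modelled perspective size, which indicates partial occlusion by a visual obstacle) can be sketched as follows. This is a hypothetical illustration only: the function names, the linear perspective model, and the thresholds are assumptions and are not taken from the patent text.

```python
from collections import defaultdict

# Hypothetical sketch of the claimed idea; the linear perspective model,
# thresholds, and names below are assumptions, not the patented method.

def modelled_size(y, a=0.5, b=20.0):
    """Toy perspective model: expected object height grows linearly with
    the image row y (objects lower in the frame are closer, so larger)."""
    return a * y + b

def detect_obstacles(status_records, ratio_threshold=0.6, min_hits=3):
    """status_records: iterable of (x, y, measured_size) tuples, one per
    observation of a tracked surveillance object.  A position is flagged
    as a potential visual obstacle when the measured size repeatedly
    falls well below the modelled perspective size at that position."""
    hits = defaultdict(int)
    for x, y, size in status_records:
        expected = modelled_size(y)
        if expected > 0 and size / expected < ratio_threshold:
            # Object appears too small at this position: it is likely
            # partially hidden behind an obstacle.
            hits[(round(x), round(y))] += 1
    # Require repeated evidence before declaring an obstacle.
    return {pos for pos, n in hits.items() if n >= min_hits}
```

In these terms, claim 8's update/verification step would correspond to feeding the status data records of further surveillance objects through the same accumulator, and claim 10's size filtering to the inverse operation: scaling a measured size back up where a known obstacle explains the shortfall.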
GB0711019A 2006-06-12 2007-06-07 Image processing method, video surveillance system, and computer program Expired - Fee Related GB2439184B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
DE102006027120A DE102006027120A1 (en) 2006-06-12 2006-06-12 Image processing method, video surveillance system and computer program

Publications (3)

Publication Number Publication Date
GB0711019D0 GB0711019D0 (en) 2007-07-18
GB2439184A true GB2439184A (en) 2007-12-19
GB2439184B GB2439184B (en) 2008-11-26

Family

ID=38318963

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0711019A Expired - Fee Related GB2439184B (en) 2006-06-12 2007-06-07 Image processing method, video surveillance system, and computer program

Country Status (2)

Country Link
DE (1) DE102006027120A1 (en)
GB (1) GB2439184B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102008002275A1 (en) 2008-06-06 2009-12-10 Robert Bosch Gmbh Image processing device with calibration module, method for calibration and computer program
CN105046874B (en) * 2015-07-07 2018-05-15 合肥指南针电子科技有限责任公司 A kind of intelligent screen monitoring system of public security prison institute

Citations (2)

Publication number Priority date Publication date Assignee Title
US6185314B1 (en) * 1997-06-19 2001-02-06 Ncr Corporation System and method for matching image information to object model information
US20040119848A1 (en) * 2002-11-12 2004-06-24 Buehler Christopher J. Method and apparatus for computerized image background analysis

Cited By (2)

Publication number Priority date Publication date Assignee Title
US20110304729A1 (en) * 2010-06-11 2011-12-15 Gianni Arcaini Method for Automatically Ignoring Cast Self Shadows to Increase the Effectiveness of Video Analytics Based Surveillance Systems
US8665329B2 (en) * 2010-06-11 2014-03-04 Gianni Arcaini Apparatus for automatically ignoring cast self shadows to increase the effectiveness of video analytics based surveillance systems

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20230607