EP1010130A4 - Low false alarm rate video security system using object classification - Google Patents

Low false alarm rate video security system using object classification

Info

Publication number
EP1010130A4
EP1010130A4
Authority
EP
European Patent Office
Prior art keywords
scene
security system
intruder
human
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP97954298A
Other languages
English (en)
French (fr)
Other versions
EP1010130A1 (de)
Inventor
John R Wootton
Gary S Waldman
Gregory L Hobson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Esco Technologies Inc
Original Assignee
Esco Electronics Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US08/772,731 (US5956424A)
Priority claimed from US08/772,595 (US5937092A)
Application filed by Esco Electronics Corp filed Critical Esco Electronics Corp
Publication of EP1010130A1
Publication of EP1010130A4

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602 Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G08B13/19604 Image analysis to detect motion of the intruder, e.g. by frame subtraction involving reference image or background adaptation with time to compensate for changing conditions, e.g. reference image update on detection of light level change
    • G08B13/19606 Discriminating between target movement or movement in an area of interest and other non-significative movements, e.g. target movements induced by camera shake or movements of pets, falling leaves, rotating fan
    • G08B13/1961 Movement detection not involving frame subtraction, e.g. motion detection on the basis of luminance changes in the image
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • This invention relates to video security systems and a method for detecting the presence of an intruder in an area being monitored by the system; and more particularly, to i) the rejection of false alarms which might otherwise occur because of global or local, natural or manmade, lighting changes which occur within a scene observed by the system, and ii) the discernment of human from non-human (animal) intruders within the scene.
  • a security system of the invention uses a video camera as the principal sensor and processes the resulting image to determine the presence or non-presence of an intruder.
  • the fundamental process is to establish a reference scene known, or assumed, to have no intruder(s) present.
  • An image of the present scene, as provided by the video camera, is compared with an image of the reference scene.
  • the system and method operate to first eliminate possible sources of false alarms, and to then classify any remaining anomaly as human or non-human.
  • the video security system and image processing methodology as described herein recognizes anomalies resulting from these other causes so these, too, can be accounted for.
  • the method includes comparing, on a pixel by pixel basis, the current image with the reference image to obtain a difference image.
  • any nonzero pixel in the difference image indicates the possible presence of an intrusion, after image artifacts such as noise, aliasing of the video, and movement within the scene not attributable to a life form (animal or human) such as the hands of a clock, screen savers on computers, oscillating fans, etc., have been accounted for.
  • the system and method use an absolute difference technique with pixel-by-pixel subtraction; the process is sensitive to surface differences between the two scenes but insensitive to whether a change is light-on-dark or dark-on-light, and thus is very sensitive to any intrusion within the scene.
  • each pixel represents a gray level measure of the scene intensity reflected from that part of the scene. The gray level intensity can alter for a variety of reasons, the most relevant of these being a new physical presence within that part of the scene.
  • Two important features of the video security system are to inform an operator/verifier of the presence of a human intruder, and to not generate false alarms.
  • the system must operate to eliminate as many false alarms as possible without impacting the overall probability of detecting an intruder's presence.
  • a fundamental cause of false alarms stems from the sensor and methodology used to ascertain if an intrusion has occurred. By use of the processing methodology described herein, various effects which could otherwise trigger false alarms are accounted for so that only a life form intruding into the scene will produce an alarm.
  • Gray level intensity can change for a variety of reasons, the most important being a new physical presence within a particular part of the scene. Additionally, the intensity will change at that location if the overall lighting of the total scene changes (a global change), or the lighting at this particular part of the scene changes (a local change), or the AGC (automatic gain control) of the camera changes, or the ALC (automatic light level) of the camera changes. With respect to global or local lighting changes, these can result from natural lighting changes or manmade lighting changes. Finally, there will be a difference of gray level intensity at a pixel level if there is noise present in the video. Only the situation of a physical presence in the scene is a true alarm; the remainder all comprise false alarms within the system.
  • For a security system to be economically viable and to avoid an unduly high load on an operator who must verify each alarm, the system must process images in a manner which eliminates as many false alarms as possible without impacting the overall probability of detecting the presence of an intruder.
  • U.S. patent 5,289,275 to Ishii et al. is directed to a surveillance monitoring system using image processing for monitoring fires and thefts.
  • the patent teaches use of a color camera for monitoring fires and a method of comparing the color ratio at each pixel in an image to estimate the radiant energy represented by each pixel. A resulting ratio is compared to a threshold with the presence of a fire being indicated if the threshold is surpassed.
  • a similar technique for detecting the presence of humans is also described.
  • the patent teaches the use of image processing together with a camera to detect the presence of fires and abnormal objects.
  • U.S. patent 4,697,097 to Yausa et al. also teaches use of a camera to detect the presence of an object.
  • the system automatically dials and sends a difference image, provided the differences are large enough, to a remote site over a telephone line.
  • the image is viewed by a human. While teaching some aspects of detection, Yausa et al. does not go beyond the detection process to attempt to use image processing to recognize that the anomaly is caused by a human presence.
  • U.S. patent 4,257,063 which is directed to a video monitoring system and method, teaches that a video line from a camera can be compared to the same video line viewed at an earlier time to detect the presence of a human.
  • the detection device is not a whole image device, nor does it make any compensation for light changes, nor does it teach attempting to automatically recognize the contents of an image as being derived from a human.
  • U.S. patent 4,161,750 teaches that changes in the average value of a video line can be used to detect the presence of an anomalous object. Whereas the implementation is different from the '063 patent, the teaching is basically the same.
  • Among the objects of the invention are: the provision of a video security system and method for visually monitoring a scene and detecting the presence of an intruder within the scene; the provision of such a system and method whose operation is based upon the premise that only the presence of a human intruder is of consequence to the security system, with everything else constituting a false alarm; the provision of such a system and method which readily distinguish between changes within the scene caused by a person entering the scene and changes resulting from lighting changes (whether global or local, natural or manmade) or other anomalies occurring within the scene; the provision of such a system and method which employ a recognition process, rather than the abnormality process used in other systems, to differentiate between human and non-human objects, so as to reduce or substantially eliminate false alarms; and the provision of such a system and method which provide a high probability of detection of the presence of a human, while having a low probability of false alarms.
  • a video detection system detects the presence of an intruder in a scene from video provided by a camera observing the scene.
  • a recognition process differentiates between human and non-human (animal) life forms. The presence of a human is determined with a high degree of confidence, so there is a very low probability of false alarms. Possible false alarms resulting from the effects of noise, aliasing, non-intruder motion occurring within the scene, and global or local lighting changes are first identified, and only then is object recognition performed.
  • Performing object recognition includes determining which regions within the image may be an intruder, outlining and growing those regions so the result encompasses all of what may be the intruder, determining a set of shape features from the region and eliminating possible shadow effects, normalizing the set of features, and comparing the resulting set with sets of features for humans and non-human (animal) life forms.
  • the result of the comparison produces a confidence level as to whether or not the intruder is a human. If the confidence level is sufficiently high, an alarm is given.
  • Fig. 1 is a simplified block diagram of a video security system of the present invention for viewing a scene and determining the presence of an intruder in the scene;
  • Fig. 2 is a representation of an actual scene viewed by a camera of the system
  • Fig. 3 is the same scene as Fig. 2 but with the presence of an intruder
  • Fig. 4 is a representation of another actual scene under one lighting condition
  • Fig. 5 is a representation of the same scene under different lighting conditions and with no intruder in the scene;
  • Fig. 6A is a representation of the object in Fig. 3 including its shadow
  • Fig. 6B illustrates outlining and segmentation of the object
  • Fig. 6C illustrates the object with its shadow removed and as resampled for determining a set of features for the object
  • Figs. 7A-7C represent non-human (animal) life forms with which features of the object are compared to determine if the object represents a human or non-human life form and wherein Fig. 7A represents a cat, Fig. 7B a dog, and Fig. 7C a bird;
  • Figure 8 is a simplified time line indicating intervals at which images of the scene are viewed by the camera system
  • Fig. 9 represents a pixel array such as forms a portion of an image; and Fig. 10 illustrates masking of an image for those areas within a scene where fixed objects having an associated movement or lighting change are located.
  • a video security system of the invention is indicated generally at 10 in Fig. 1.
  • the system employs one or more cameras C1-Cn, each of which continually views a respective scene and produces a signal representative of the scene.
  • the cameras may operate in the visual or infrared portions of the light spectrum and a video output signal of each camera is supplied to a processor means 12.
  • Means 12 processes each received signal from a camera to produce an image represented by the signal and compares the image representing the scene at one point in time with a similar image of the scene at a previous point in time.
  • the signal from the imaging means represented by the cameras may be either an analog or digital signal, and processing means 12 may be an analog, digital, or hybrid processor.
  • In Fig. 2 an image of a scene is shown, the representation being the actual image produced by a camera C.
  • Fig. 2 represents, for example, a reference image of the scene.
  • Fig. 3 is an image exactly the same as that in Fig. 2 except that now a person (human intruder) has been introduced into the scene.
  • Fig. 3 is again an actual image produced by a camera C.
  • Fig. 4 represents a reference image of a scene
  • Fig. 5 a later image in which there is a lighting change but not an intrusion.
  • the system and method of the invention operate to identify the presence of such a human intruder and provide an appropriate alarm. However, it is also a principal feature of the invention to not produce false alarms.
  • a single processor can handle several cameras positioned at different locations within a protected site. In use, the processor cycles through the different cameras, visiting each at a predetermined interval. At system power-up, the processor cycles through all of the cameras doing a self-test on each. One important test at this time is to record a reference frame against which later frames will be compared. A histogram of pixel values is formed from this reference frame.
  • a reference frame f1 is created. Throughout the monitoring operation, this reference frame is continuously updated if there is no perceived motion within the latest image against which the reference image is compared. At each subsequent visit to the camera a new frame f2 is produced and subtracted from the reference. If the difference is not significant, the system goes on to the next camera. However, if there is a difference, frame f2 is stored and a third frame f3 is created on the next visit and compared to both frames f1 and f2. Only if there is a significant difference between frames f3 and f2 and also between frames f3 and f1 is further processing done.
  • This three frame procedure eliminates false alarms resulting from sudden, global light changes such as caused by lightning flashes or interior lights going on or off.
  • a lightning flash occurring during frame f2 will be gone by frame f3, so there will be no significant difference between frames f3 and f1.
  • if the interior lights have simply gone on or off between frames f1 and f2, there will be no significant changes between frames f3 and f2. In either instance, the system proceeds to the next camera with no further processing.
  • Significant differences between frames f1 and f2, frames f3 and f2, and frames f3 and f1 indicate a possible intrusion requiring more processing; a minimal sketch of this three-frame test follows.
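The three-frame test lends itself to a compact implementation. Below is a minimal sketch, assuming 8-bit grayscale frames held as NumPy arrays; the two threshold constants are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

# Illustrative thresholds (assumed values, not the patent's):
DIFF_THRESH = 25     # per-pixel gray-level difference counted as a change
COUNT_THRESH = 500   # changed-pixel count regarded as "significant"

def significant_change(a: np.ndarray, b: np.ndarray) -> bool:
    """True if frames a and b differ in enough pixels."""
    diff = np.abs(a.astype(np.int16) - b.astype(np.int16))
    return int((diff > DIFF_THRESH).sum()) > COUNT_THRESH

def needs_processing(f1, f2, f3) -> bool:
    """Three-frame test: process further only if f3 differs from BOTH f2 and f1.

    A lightning flash present only in f2 fails the f3-vs-f1 test; lights
    switched on (and staying on) between f1 and f2 fail the f3-vs-f2 test.
    """
    return significant_change(f3, f2) and significant_change(f3, f1)
```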
  • non- intruder motion occurring within the scene is also identified so as not to trigger processing or cause false alarms.
  • movement of the blades of a fan located within the scene, for example, would also appear as a change from one image to another.
  • if the fan is an oscillating fan, its sweeping movement would also be detected as a difference from one image to another.
  • because such an object occupies a generally fixed position within the scene and its movement is spatially constrained, the area where this movement occurs is identified and masked so that, in most instances, motion effects resulting from operation of the object (fan) are disregarded; a sketch of this masking step follows.
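As an illustration of the masking step, the sketch below zeroes out a configured motion mask before the difference is evaluated; the mask array and how it is configured are assumptions made for the example.

```python
import numpy as np

def masked_difference(current: np.ndarray, reference: np.ndarray,
                      motion_mask: np.ndarray) -> np.ndarray:
    """Absolute difference with known-motion areas disregarded.

    motion_mask is boolean, True wherever a fixed object with spatially
    constrained movement (fan, clock hands, screen saver) is located.
    """
    diff = np.abs(current.astype(np.int16) - reference.astype(np.int16))
    diff[motion_mask] = 0   # changes inside the masked area are ignored
    return diff
```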
  • Any video alert system which uses frame-to-frame changes in the video to detect intrusions into a secured area is also vulnerable to false alarms from the inadvertent (passing automobile lights, etc.) or deliberate (police or security guard flashlights) introduction of light into the area, even though no one has physically entered the area.
  • the system and method of the invention differentiate between a change in a video frame due to a change in the irradiation of the surfaces in the FOV (field of view) as in Fig. 5, and a change due to the introduction of a new reflecting surface in the FOV as in Fig. 3.
  • the former is then rejected as a light "intrusion" requiring no alarm, whereas the latter is identified as a human intruder for which an alarm is given.
  • aliasing is caused by sampling at or near the intrinsic resolution of the system. Because the scene is sampled at or near the Nyquist frequency, the video, on a frame-by-frame basis, appears to scintillate, and certain areas will produce Moire-like effects. Subtraction on a frame-by-frame basis would therefore cause multiple detections on scenes that are unchanging. In many applications where this occurs it is not economically possible to oversample. Elimination of aliasing effects is accomplished by convolving the image with an equivalent two-dimensional (2D) smoothing filter; whether this is a 3 x 3 or 5 x 5 filter, or a larger one, is a matter of preference, as are the weights of the filter.
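A minimal sketch of this anti-aliasing step, assuming an equal-weight 3 x 3 kernel (the text leaves kernel size and weights to preference):

```python
import numpy as np
from scipy.ndimage import convolve

SMOOTH_3X3 = np.full((3, 3), 1.0 / 9.0)  # equal-weight 3 x 3 smoothing kernel

def smooth(frame: np.ndarray) -> np.ndarray:
    """Suppress scintillation and Moire effects from near-Nyquist sampling."""
    return convolve(frame.astype(np.float32), SMOOTH_3X3, mode="nearest")
```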
DETECTION PROCESS

  • the detection process consists of comparing the current image to a reference image. To initialize the system it is assumed that the operator has control over the scene and will therefore select a single frame for the reference when nothing is present. (If necessary, up to 60 successive frames can be selected and integrated together to obtain an averaged reference image.) As shown in Fig. 1, apparatus 10 employs multiple cameras C1-Cn, but the methodology with respect to one camera is applicable to all cameras. For each camera, an image is periodically selected and the absolute difference between the current image (suitably convolved with the antialiasing filter) and the reference is determined. The difference image is then thresholded (an intensity threshold) and all of the pixels exceeding the threshold are accumulated.
  • This step eliminates a significant number of pixels that otherwise would result in a non-zero result simply by differencing the two images.
  • Making this threshold value adaptive within a given range of threshold values ensures consistent performance. If the count of the pixels exceeding the intensity threshold exceeds a pixel count threshold, then a potential detection has occurred. At this time, all connected hit pixels (pixels that exceed the intensity threshold) are segmented, and a pixel count of each segmented object is taken. If the pixel count of any object exceeds another pixel count threshold, then a detection is declared. Accordingly, detection is defined as the total number of hit pixels in the absolute difference image being large and there being a large connected object in the absolute difference image; a sketch of this test follows.
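A sketch of the two-threshold detection test just described; the threshold values are assumptions, and in the real system the intensity threshold is adaptive rather than fixed.

```python
import numpy as np
from scipy.ndimage import label

INTENSITY_T = 25     # intensity threshold (adaptive in the text; fixed here)
TOTAL_HITS_T = 400   # total hit-pixel count threshold (assumed)
OBJECT_SIZE_T = 150  # connected-object pixel count threshold (assumed)

def detect(current: np.ndarray, reference: np.ndarray) -> bool:
    """Detection = many hit pixels AND one large connected object."""
    diff = np.abs(current.astype(np.int16) - reference.astype(np.int16))
    hits = diff > INTENSITY_T                  # pixels exceeding the threshold
    if int(hits.sum()) <= TOTAL_HITS_T:
        return False                           # not enough change overall
    labeled, n = label(hits)                   # segment connected hit pixels
    sizes = np.bincount(labeled.ravel())[1:]   # pixel count per object
    return bool(n and sizes.max() > OBJECT_SIZE_T)
```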
  • Noise induced detections are generally spatially small and distributed randomly throughout the image.
  • the basis for removing these events is to ascertain the size (area) of connected pixels that exceed the threshold set for detection. To achieve this, the regions where the detected pixels occur are grown into connected "blobs". After region growing, those blobs that are smaller than a given size threshold are removed as false alarms.
  • a region growing algorithm starts with a search for the first object pixel, as the outlining algorithm does. Since searching and outlining have already been performed, and since the outline pixels are part of the segmented object, these do not need to be region grown again.
  • the outline pixels are now placed on a stack and zeroed out in the absolute difference image. A pixel is then selected (removed from the stack) for examination.
  • the selected pixel P and all of its eight neighbors P1-P8 are examined to see if hit points occur (i.e. they are non- zero). If a neighbor pixel is non-zero, then it is added to the stack and zeroed out in the absolute difference image.
  • In region growing, all eight neighboring pixels are examined, whereas in outlining, the examination of neighboring pixels stops as soon as an edge pixel is found; thus, in outlining, as few as one neighbor may be investigated. The region growing segmentation process stops once the stack is empty; a sketch of the procedure follows.
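The stack-based region growing can be sketched as follows, operating destructively on the absolute difference image as the text describes; the (y, x) coordinate convention is an assumption of the example.

```python
import numpy as np

def grow_region(diff, outline):
    """Grow a blob from `outline` (a list of (y, x) seed pixels) in `diff`.

    Pixels are zeroed out as they are taken so they are never revisited;
    the process stops once the stack is empty.
    """
    for y, x in outline:
        diff[y, x] = 0          # outline pixels are already segmented
    stack = list(outline)
    blob = []
    h, w = diff.shape
    while stack:
        y, x = stack.pop()      # select (remove) a pixel from the stack
        blob.append((y, x))
        # examine all eight neighbors P1-P8 for non-zero (hit) points
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (dy or dx) and 0 <= ny < h and 0 <= nx < w and diff[ny, nx]:
                    diff[ny, nx] = 0
                    stack.append((ny, nx))
    return blob
```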
  • Land's theory was introduced to explain why human observers are readily able to identify differences in surface lightness despite greatly varying illumination across a scene.
  • Land's theory is also applicable to viewing systems which function in place of a human viewer. According to the theory, even if the amount of energy reflected (incident energy times surface reflectance) from two different surfaces is the same, an observer can detect a difference in the lightness of the two surfaces if such a difference exists. In other words, the human visual system has a remarkable ability to see surface differences and ignore lighting differences.
  • a video signal (gray level) for any pixel is given by

    g ∝ ∫ E(λ) r(λ) S(λ) dλ   (1)

    where E(λ) is the scene spectral irradiance at the pixel in question, r(λ) is the scene spectral reflectance at the pixel in question, and S(λ) is the sensor spectral response. The constant of proportionality in (1) depends on geometry and camera characteristics, but is basically the same for all pixels in the frame.
  • ratios of adjacent pixel values satisfy the requirement of being determined by scene reflectances only and are independent of scene illumination. It remains to consider the practicality of the approximations used to arrive at (3).
  • a basic assumption in the retinex process is that of only gradual spatial variations in the scene irradiance; that is, we must have nearly the same irradiance of adjacent pixel areas in the scene. This assumption is generally true for diffuse lighting, but for directional sources it may not be. For example, the intrusion of a light beam into the area being viewed can introduce rather sharp shadows, or change the amount of light striking a vertical surface without similarly changing the amount of light striking an adjacent tilted surface.
  • ratios between pixels straddling the shadow line in the first instance, or straddling the two surfaces in the second instance, will change even though no object has been introduced into the scene.
  • the pixel- to-pixel change is often less than it appears to the eye, and the changes only appear at the boundaries, not within the interiors of the shadows or surfaces.
  • Another method, based on edge mapping, is also possible. As in the previous situation, the edge mapping process would be employed after an initial detection stage is triggered by pixel value changes from one frame to the next. Within each detected "blob" area, an edge map is made for both the initial (unchanged) frame and the changed frame that triggered the alert. Such an edge map can be constructed by running an edge enhancement filter (such as a Sobel filter) and then thresholding. If the intrusion is just a light change, then the edges within the blob should be basically in the same place in both frames. However, if the intrusion is an object, then some edges from the initial frame will be obscured in the changed frame and some new edges, internal to the intruding object, will be introduced; a sketch of this comparison follows.
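A sketch of that edge-map comparison, using a Sobel filter as the text suggests; the edge-strength threshold and the "fraction moved" score are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import sobel

EDGE_T = 60.0   # edge-strength threshold (assumed)

def edge_map(patch: np.ndarray) -> np.ndarray:
    """Boolean edge map from the Sobel gradient magnitude."""
    gx = sobel(patch.astype(np.float32), axis=1)
    gy = sobel(patch.astype(np.float32), axis=0)
    return np.hypot(gx, gy) > EDGE_T

def edges_moved(blob_before: np.ndarray, blob_after: np.ndarray) -> float:
    """Fraction of edge pixels not common to both frames.

    Near zero for a pure light change (edges stay put); larger when an
    object obscures old edges and introduces new internal ones.
    """
    e1, e2 = edge_map(blob_before), edge_map(blob_after)
    union = int((e1 | e2).sum())
    return float((e1 ^ e2).sum()) / union if union else 0.0
```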
  • the basic premise of the variable light rejection algorithm used in the method of the invention is to compare ratios of adjacent pixels from a segmented area in frame f1 with ratios from corresponding pixels in frame f2, but to restrict the ratios to those across significant edges. Restricting the processing to ratios of pixels tends to reject illumination changes, and using only edge pixels eliminates the dilution of information caused by large uniform areas.
  • a) Ratios R of adjacent pixels (both horizontally and vertically) in frame f1 are tested to determine if they significantly differ from unity: is R − 1 > T1, or (1/R) − 1 > T1, where T1 is a predetermined threshold value? Every time such a significant edge pair is found, an edge count value is incremented.
  • b) Those pixel pairs that pass either of the tests in a) have their corresponding ratios R' for frame f2 calculated; a sketch of this ratio test follows.
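Steps a) and b) can be sketched as below; the thresholds T1 and T2 and the final "fraction changed" score are illustrative assumptions, not the patent's values.

```python
import numpy as np

T1 = 0.30   # edge-significance threshold on the ratio test (assumed)
T2 = 0.15   # allowed relative change of an edge ratio between frames (assumed)

def edge_ratio_change(f1: np.ndarray, f2: np.ndarray) -> float:
    """Fraction of significant f1 edge pairs whose ratio changed in f2.

    A pure illumination change leaves adjacent-pixel ratios nearly intact
    (the retinex premise); a new reflecting surface disturbs them.
    """
    g1 = f1.astype(np.float32) + 1.0   # +1 avoids division by zero
    g2 = f2.astype(np.float32) + 1.0
    pairs = [
        (g1[:, :-1] / g1[:, 1:], g2[:, :-1] / g2[:, 1:]),  # horizontal pairs
        (g1[:-1, :] / g1[1:, :], g2[:-1, :] / g2[1:, :]),  # vertical pairs
    ]
    edges = changed = 0
    for r1, r2 in pairs:
        sig = (r1 - 1.0 > T1) | (1.0 / r1 - 1.0 > T1)  # step a): significant edges
        edges += int(sig.sum())
        # step b): corresponding ratios R' in f2, tested against the f1 ratios
        changed += int((np.abs(r2[sig] / r1[sig] - 1.0) > T2).sum())
    return changed / edges if edges else 0.0
```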
SHAPE FEATURES

  • Having outlined and region grown an object to be recognized, a series of linear shape features and Fourier descriptors is extracted for each segmented region. Values for shape features are numerically derived from the image of the object based upon the x, y pixel coordinates obtained during outlining and segmentation of the object. These features include, for example, values representing the height of the object (ymax − ymin), its width (xmax − xmin), horizontal and vertical edge counts, and degree of circularity.
  • Fourier descriptors represent a set of features used to recognize a silhouette or contour of an object. As shown in Fig. 6C, the outline of an object is resampled into equally spaced points located about the edge of the object. The Fourier descriptors are computed by treating these points as complex numbers and performing a complex FFT (Fast Fourier Transform) on the sequence. The resulting coefficients are a function of the position, size, orientation, and starting point P of the outline; using these coefficients, Fourier descriptors are extracted which are invariant to those variables. As a result of performing the feature extractions, what remains is a set of features which describes the segmented object; a sketch of the computation follows.
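A simplified sketch of the descriptor computation. The resampling here is crude index-based resampling standing in for true equal arc-length spacing, and the point and coefficient counts are assumed parameters.

```python
import numpy as np

def fourier_descriptors(xs, ys, n_points=64, n_coeffs=16):
    """Outline descriptors invariant to position, size, orientation and
    starting point: drop the DC term (position), divide by the first
    harmonic (size), keep magnitudes only (orientation/starting point)."""
    z = np.asarray(xs, dtype=np.float64) + 1j * np.asarray(ys, dtype=np.float64)
    # resample the closed outline to n_points (index-based approximation)
    idx = np.floor(np.linspace(0, len(z), n_points, endpoint=False)).astype(int)
    c = np.fft.fft(z[idx % len(z)])
    c[0] = 0.0                                  # position invariance
    mags = np.abs(c) / (np.abs(c[1]) + 1e-12)   # size invariance
    return mags[2:2 + n_coeffs]                 # rotation/start-point invariance
```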
FEATURE SET NORMALIZATION

  • the set of features may be rescaled if the range of values for one of the features of the object is larger or smaller than the range of the object's other features.
  • a test data base is established and when the feature data is tested on this data base, a feature may be found to be skewed.
  • a mathematical function such as a logarithmic function is applied to the feature value.
  • each feature value may also be passed through a linear function; that is, for example, a constant value is added to the feature value, and the result is then multiplied by another constant value (see the sketch below). It will be understood that other consistent descriptors, such as wavelet coefficients and fractal dimensions, can be used instead of Fourier descriptors.
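Both adjustments amount to a per-feature transform, sketched below with assumed constants.

```python
import numpy as np

def normalize_features(features, skewed, offsets, scales):
    """Log-transform skewed features, then rescale linearly.

    `skewed` is a boolean mask found from the test data base; `offsets`
    and `scales` are the per-feature linear constants (all assumed here).
    """
    f = np.asarray(features, dtype=np.float64).copy()
    f[skewed] = np.log1p(f[skewed])   # compress the range of a skewed feature
    return scales * (f + offsets)     # add a constant, multiply by another
```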
  • An object classifier portion of the processor means is provided, as an input, the normalized feature set for the object to be classified.
  • the object classifier has already been provided feature set information for humans as well as for a variety of animals (cat, dog, bird), such as shown in Figs. 7A-7C. These figures show the presence of each animal in an actual scene as viewed by the camera of the system.
  • the classifier can determine a confidence value for each of three classes: human, animal and unknown. Operation of the classifier includes implementation of a linear or non-linear classifier.
  • a linear classifier may, for example, implement a Bayes technique, as is well known in the art.
  • a non-linear classifier may employ, for example, a neural net, which is also well known in the art, or its equivalent. Regardless of the object classifier used, operation of the classifier produces a "hard" decision as to whether the object is human, non-human, or unknown. Further, the method involves using the algorithm to look at a series of consecutive frames in which the object appears, performing the above-described sequence of steps for each individual frame, and integrating the results of the separate classifications to further verify the result; a sketch of this stage follows.
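As a stand-in for the Bayes or neural-net classifier the text allows, the sketch below uses a simple nearest-mean linear rule plus frame-to-frame integration; the class means, the confidence mapping, and the thresholds are all assumptions.

```python
import numpy as np

CLASSES = ("human", "animal")
UNKNOWN_T = 0.5   # confidence below this yields "unknown" (assumed)

def classify(features, class_means):
    """Return (label, confidence) for one normalized feature set."""
    d = {c: np.linalg.norm(features - class_means[c]) for c in CLASSES}
    best = min(d, key=d.get)
    conf = 1.0 - d[best] / (sum(d.values()) + 1e-9)  # crude [0, 1] confidence
    return (best, conf) if conf >= UNKNOWN_T else ("unknown", conf)

def classify_sequence(feature_sets, class_means):
    """Integrate the separate per-frame classifications over consecutive frames."""
    votes = [classify(f, class_means) for f in feature_sets]
    human_conf = np.mean([c for lbl, c in votes if lbl == "human"] or [0.0])
    return "human" if human_conf >= UNKNOWN_T else "non-human/unknown"
```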
  • In response to the results of the object classification, the processing means provides an indication of an intrusion if the object is classified as a human; it provides no indication if the object is classified as an animal. This prevents false alarms. It will be understood that, because the image of a scene provided by a camera C is evaluated on a continual basis (every one-half second, for example), the fact that a human present in the scene may not be identified as such at one instant does not mean that the intrusion will be missed; it means only that the human was not recognized as such at that instant.
  • An alarm, when given, is transmitted to a remote site such as a central monitoring location staffed by security personnel and from which a number of locations can be simultaneously monitored.
EP97954298A 1996-12-23 1997-12-23 Low false alarm rate video security system using object classification Withdrawn EP1010130A4 (de)

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US77199196A 1996-12-23 1996-12-23
US08/772,731 US5956424A (en) 1996-12-23 1996-12-23 Low false alarm rate detection for a video image processing based security alarm system
US771991 1996-12-23
US772595 1996-12-23
US08/772,595 US5937092A (en) 1996-12-23 1996-12-23 Rejection of light intrusion false alarms in a video security system
US772731 1996-12-23
PCT/US1997/024163 WO1998028706A1 (en) 1996-12-23 1997-12-23 Low false alarm rate video security system using object classification

Publications (2)

Publication Number Publication Date
EP1010130A1 (de) 2000-06-21
EP1010130A4 (de) 2005-08-17

Family

ID=27419676

Family Applications (1)

Application Number Title Priority Date Filing Date
EP97954298A 1996-12-23 1997-12-23 Low false alarm rate video security system using object classification Withdrawn EP1010130A4 (de)

Country Status (4)

Country Link
EP (1) EP1010130A4 (de)
AU (1) AU5810998A (de)
CA (1) CA2275893C (de)
WO (1) WO1998028706A1 (de)

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
NO982640L (no) * 1998-06-08 1999-12-09 Nyfotek As Method and system for monitoring an area
GB9822956D0 (en) 1998-10-20 1998-12-16 Vsd Limited Smoke detection
DK1079350T3 (da) * 1999-07-17 2004-02-02 Siemens Building Tech Ag Device for room surveillance
US7479980B2 (en) 1999-12-23 2009-01-20 Wespot Technologies Ab Monitoring system
SE517900C2 (sv) * 1999-12-23 2002-07-30 Wespot Ab Method, monitoring system and monitoring unit for monitoring a surveillance site
US6819353B2 (en) 1999-12-23 2004-11-16 Wespot Ab Multiple backgrounds
US6774905B2 (en) 1999-12-23 2004-08-10 Wespot Ab Image data processing
SE519700C2 (sv) * 1999-12-23 2003-04-01 Wespot Ab Image data processing
US6940998B2 (en) 2000-02-04 2005-09-06 Cernium, Inc. System for automated screening of security cameras
GB0028162D0 (en) * 2000-11-20 2001-01-03 Sentec Ltd Distributed image processing technology and services
US7212651B2 (en) * 2003-06-17 2007-05-01 Mitsubishi Electric Research Laboratories, Inc. Detecting pedestrians using patterns of motion and appearance in videos
EP1672604A1 (de) * 2004-12-16 2006-06-21 Siemens Schweiz AG Method and device for detecting sabotage of a surveillance camera
US7822224B2 (en) 2005-06-22 2010-10-26 Cernium Corporation Terrain map summary elements
US7526105B2 (en) 2006-03-29 2009-04-28 Mark Dronge Security alarm system
ATE521054T1 (de) 2006-12-20 2011-09-15 Axis Ab Method and device for detecting sabotage of a surveillance camera
US8571261B2 (en) 2009-04-22 2013-10-29 Checkvideo Llc System and method for motion detection in a surveillance video
CN102169614B (zh) * 2011-01-14 2013-02-13 云南电力试验研究院(集团)有限公司 Electric power operation safety monitoring method based on image recognition
CN106878668B (zh) 2015-12-10 2020-07-17 微软技术许可有限责任公司 Motion detection of an object
US10535252B2 (en) 2016-08-10 2020-01-14 Comcast Cable Communications, Llc Monitoring security
GB2557597B (en) * 2016-12-09 2020-08-26 Canon Kk A surveillance apparatus and a surveillance method for indicating the detection of motion
CN110113561A (zh) * 2018-02-01 2019-08-09 广州弘度信息科技有限公司 Personnel lingering detection method, device, server and system
JP7415872B2 (ja) * 2020-10-23 2024-01-17 横河電機株式会社 Apparatus, system, method and program


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5144685A (en) * 1989-03-31 1992-09-01 Honeywell Inc. Landmark recognition for autonomous mobile robots
US5274714A (en) * 1990-06-04 1993-12-28 Neuristics, Inc. Method and apparatus for determining and organizing feature vectors for neural network recognition
US5493273A (en) * 1993-09-28 1996-02-20 The United States Of America As Represented By The Secretary Of The Navy System for detecting perturbations in an environment using temporal sensor data

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH02171897A (ja) * 1988-12-23 1990-07-03 Matsushita Electric Works Ltd Abnormality monitoring device
JPH07192112A (ja) * 1993-12-27 1995-07-28 Oki Electric Ind Co Ltd Intruding object recognition method

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
GONZALEZ R C ET AL: "IMAGE SEGMENTATION AND DESCRIPTION", DIGITAL IMAGE PROCESSING, 1977, pages 320 - 322,345,34, XP002981665 *
GOUJOU E ET AL: "Human detection with video surveillance system", INDUSTRIAL ELECTRONICS, CONTROL, AND INSTRUMENTATION, 1995., PROCEEDINGS OF THE 1995 IEEE IECON 21ST INTERNATIONAL CONFERENCE ON ORLANDO, FL, USA 6-10 NOV. 1995, NEW YORK, NY, USA,IEEE, US, vol. 2, 6 November 1995 (1995-11-06), pages 1179 - 1184, XP010154890, ISBN: 0-7803-3026-9 *
LAND E H: "THE RETINEX THEORY OF COLOR VISION", SCIENTIFIC AMERICAN, SCIENTIFIC AMERICAN INC. NEW YORK, US, vol. 237, no. 6, December 1977 (1977-12-01), pages 108 - 120,122, XP008041094, ISSN: 0036-8733 *
MECOCCI A ET AL: "IMAGE SEQUENCE ANALYSIS FOR COUNTING IN REAL TIME PEOPLE GETTING IN AND OUT OF A BUS", SIGNAL PROCESSING, ELSEVIER SCIENCE PUBLISHERS B.V. AMSTERDAM, NL, vol. 35, no. 2, January 1994 (1994-01-01), pages 105 - 116, XP000435716, ISSN: 0165-1684 *
PATENT ABSTRACTS OF JAPAN vol. 014, no. 438 (P - 1108) 19 September 1990 (1990-09-19) *
PATENT ABSTRACTS OF JAPAN vol. 1995, no. 10 30 November 1995 (1995-11-30) *
See also references of WO9828706A1 *
YOUNG HO KIM ET AL: "An implementation of real-time hardware for moving object detection and discrimination", TENCON '96. PROCEEDINGS., 1996 IEEE TENCON. DIGITAL SIGNAL PROCESSING APPLICATIONS PERTH, WA, AUSTRALIA 26-29 NOV. 1996, NEW YORK, NY, USA,IEEE, US, vol. 2, 26 November 1996 (1996-11-26), pages 961 - 966, XP010236812, ISBN: 0-7803-3679-8 *

Also Published As

Publication number Publication date
CA2275893A1 (en) 1998-07-02
EP1010130A1 (de) 2000-06-21
WO1998028706A1 (en) 1998-07-02
CA2275893C (en) 2005-11-29
AU5810998A (en) 1998-07-17

Similar Documents

Publication Publication Date Title
CA2275893C (en) Low false alarm rate video security system using object classification
US5937092A (en) Rejection of light intrusion false alarms in a video security system
US5956424A (en) Low false alarm rate detection for a video image processing based security alarm system
US6104831A (en) Method for rejection of flickering lights in an imaging system
KR101237089B1 (ko) Forest fire smoke detection method using a random forest classification technique
EP1687784B1 (de) Smoke detection method and device
CN101751744B (zh) Smoke detection and early warning method
US20090160657A1 (en) Monitoring system
JP2000513848A (ja) Video motion detector insensitive to global changes
US20060170769A1 (en) Human and object recognition in digital video
US20070188336A1 (en) Smoke detection method and apparatus
CN112133052A (zh) Image-based fire detection method for a nuclear power plant
WO1998028706B1 (en) Low false alarm rate video security system using object classification
PT1628260E (pt) Method and device for the automatic detection of forest fires
KR20090086898A (ko) Smoke detection using a video camera
Filippidis et al. Fusion of intelligent agents for the detection of aircraft in SAR images
Tan et al. Embedded human detection system based on thermal and infrared sensors for anti-poaching application
CN113593161A (zh) Perimeter intrusion detection method
GB2413231A (en) Surveillance apparatus identifying objects becoming stationary after moving
Mahajan et al. Detection of concealed weapons using image processing techniques: A review
Frejlichowski et al. SmartMonitor: An approach to simple, intelligent and affordable visual surveillance system
JPH0620049A (ja) Intruder identification system
WO2001048719A1 (en) Surveillance method, system and module
JPH09293185A (ja) Object detection device, object detection method, and object monitoring system
Frejlichowski et al. Extraction of the foreground regions by means of the adaptive background modelling based on various colour components for a visual surveillance system

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 19990706

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

A4 Supplementary search report drawn up and despatched

Effective date: 20050706

RIC1 Information provided on ipc code assigned before grant

Ipc: 7H 04N 7/18 B

Ipc: 7G 08B 23/00 B

Ipc: 7G 08B 13/194 B

Ipc: 7G 06T 7/20 B

Ipc: 7G 06K 9/46 B

Ipc: 7G 06K 9/20 B

Ipc: 7G 06K 9/00 A

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20050701