GB2423661A - Identifying scene changes - Google Patents

Identifying scene changes

Info

Publication number
GB2423661A
GB2423661A
Authority
GB
United Kingdom
Prior art keywords
intensity
change
segment
image
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB0504091A
Other versions
GB0504091D0 (en)
Inventor
David Thomas
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to GB0504091A priority Critical patent/GB2423661A/en
Publication of GB0504091D0 publication Critical patent/GB0504091D0/en
Publication of GB2423661A publication Critical patent/GB2423661A/en
Withdrawn legal-status Critical Current


Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602 Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G08B13/19606 Discriminating between target movement or movement in an area of interest and other non-significative movements, e.g. target movements induced by camera shake or movements of pets, falling leaves, rotating fan
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/254 Analysis of motion involving subtraction of images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602 Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30232 Surveillance

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Burglar Alarm Systems (AREA)

Abstract

A data processing apparatus for automatically identifying a change in an observed scene. The apparatus is configured to segment an image of the scene into a plurality of segments, each comprising a plurality of image elements. A change in intensity of a segment relative to the intensity of that segment in a preceding image is identified, and it is determined whether the change in intensity in that segment exceeds a threshold value.

Description

APPARATUS AND METHOD FOR A SURVEILLANCE CAMERA
Background
Research has been ongoing for many years to replicate the human vision system's ability to identify, segregate and classify parts of image sequences captured by fixed or moveable cameras. The obvious applications that have driven the research are:
* Target recognition systems in the military.
* Identification of lawbreakers.
* Face recognition as an aid to surveillance of terrorists.
* Commercial security surveillance, etc.
Such systems have generally focused on the collection and transmission of video images to a central location where large computing power is available to identify and track specific objects of interest. There are a number of published papers that provide details of the algorithms utilised within these systems, most of which rely on extracting the outline of the key feature to be recognised (e.g. the face, the body, the car number plate or the target building) and then performing a template match to a library of images on a pixel basis. Alternatively, one could track an object by first identifying it by matching it to a library of images, then identifying the edge of the object, which could then be tracked through the scene.
Research at institutes such as MIT has aimed to break down the discerning features of a face into a subset of features called Eigen Features, allowing one to reduce the complex task of face matching to the sub-tasks of matching specific features such as the eyes, mouth etc. Once again the system is targeted at matching a person's features to a stored library of Eigen Features on a per-pixel basis. Once one matches specific features to the library of Eigen Vectors, one can determine with a high degree of probability the identity of the person via a look-up table.
Other image recognition systems rely on a mixture of computer and human intervention, for example where a camera monitors a scene and a complex PC algorithm located in a central monitoring station identifies a change in the scene. If a change is identified in the monitored scene, the guard is advised to check the specific monitor that is showing the scene. Such a system is acceptable for large enterprises where a full-time guard is available 24/7, but is too costly for a home or small business environment.
The advantage of such systems is that they can multiplex many video cameras to a remote location where the expensive human expertise is located. If one tries to apply such systems to the surveillance of a home or small business, it quickly becomes apparent that they are not able to operate 24 hours per day, 7 days per week whilst maintaining acceptable cost/performance requirements. Such systems also require a high bandwidth connection to be maintained from the remote camera to the central computer where the decision-making software/guard is located. Such a high bandwidth connection could be, for example, a dedicated Ethernet connection, which is expensive to install in a private home, or a WiFi connection, which can be prone to interference.
Alternate systems have been proposed where the camera has a small IR detector built in to detect motion, at which time video information from the camera is sent to the PC/guard for analysis. Such systems are prone to false alarms: for example, an animal passing near the detector, or light suddenly focused through a window onto an object, may trigger a false reading on the IR detector.
In accordance with one embodiment, this invention provides a security surveillance system that can operate autonomously and with a high level of accuracy by introducing a first level of decision-making process close to the image capture element, i.e. the CCD or IR camera. The system can be programmed to concentrate the decision-making process onto specific areas of the scene (e.g. a door or a window). It can also learn from mistakes/false alarms to improve accuracy over time. This is particularly effective when operating within a stable, fixed or specific environment. The system can also be tuned to only activate when there is a high probability that an intruder is within the scene, but to reject with a high probability those instances where there is either a change in lighting conditions or a pet within the scene. The system will only send information in the form of an alarm or a sequence of slow scan images to a central control unit when an intruder is identified. Thus the need for high bandwidth connections and constant monitoring is removed. The central unit can then take the action necessary to forward the information to the home owner or to a security monitoring firm, for example via an MMS (Multi-Media Message Service for 2.5G cellphone) video attachment.
Viewed from one aspect, the present invention provides imaging apparatus comprising an image sensor for sensing an image, an image analyser configured to segment a sensed image into a plurality of segments and to monitor, for example on a frame by frame basis, changes in image intensity in said segments. In particular, large changes in image intensity are identified. Optionally or additionally, a rate of change of a segment image intensity per frame is monitored, or changes relative to adjacent or proximal segments identified.
Description of invention
If one examines the way in which human vision operates, it is based on a hierarchical search as follows:
* The retina comprises a set of non-uniform segments forming concentric rings, where as one moves away from the optical centre of the retina the ability to identify the details of objects reduces. This is not an issue since the human is using the outer field of vision to identify rapid changes that are taking place in the environment. Once a rapid change has been identified we turn our gaze to focus onto the object that has been identified as moving into our field of vision and start to identify the object, i.e. we focus onto the item.
* The next stage of the human identification system is for the image to be captured by the retina and signals representing the image to be sent to the brain via the optic nerves.
* At this stage there is some speculation as to the exact way in which the image is understood, but due to the low data rates experienced by the optic nerves and our ability to recognise a partly seen object, or objects that are orientated in an unusual way, it is assumed that the system is hierarchical and fuzzy in its nature, i.e. we recognize an animal with 4 legs, a head and a tail, then classify it as a cat and not a dog, and then as the ginger cat which may belong to the house next door.
Conversely, an electronic image plane for example a CCD has a substantially uniform sensitivity across its image plane. The applicant has recognised this and innovatively applied this awareness to configure the operation of a camera which can mimic the operation of human vision but without the need to re-direct the camera.
In an ideal world, a camera that monitors a scene within a home or small business should capture the same image 24 hours per day, i.e. a static scene should be defined as a good state.
Modifications to the scene take place on an ongoing basis as part of the normal way of life, for example a change in illumination takes place due to movement of the sun, or dust is moving in the air etc. The challenge facing a surveillance system is to ignore these background events that occur on a regular basis whilst being able to identify intruders that need to be flagged to an expert to determine a course of action.
The Applicant has realised that the basic requirement is to develop a system that can identify and classify 'unusual' occurrences, that can act autonomously, that has a high immunity to noise, that can take into account (and ignore) slow changes in the environment, and that can easily be fine-tuned to identify intruders.
If one takes the scene where an intruder is present, it can be described as an object moving from one part of the image to another on a frame-by-frame basis, i.e. the object appears to be displaced on a frame-by-frame basis relative to the background static scene. The challenge is to identify (and classify) an object that has moved from frame to frame.
In accordance with an embodiment, this invention detects, rather than tries to track, interframe movements. Instead of analysing each pixel on a frame-by-frame basis, the system defines a grid of pixels through which transitions are monitored. One can think of this as being similar to the laser beams protecting high-security systems, where if one breaks the beam the alarm goes off. The pixel grid that one monitors could be of any form, e.g. vertical, horizontal, cross-hatch, diagonal, focused onto the boundary of specific objects, or just a sequence of dots.
The system then monitors the pixel information on the grid lines on a frame-by-frame basis. In a static situation the interframe pixel information should be identical, or, in the presence of noise or changes in illumination, should show only changes that are within predefined limits. However, if there is an intrusion there should be a large change in the interframe pixel information comprising the grid.
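The frame-by-frame comparison of grid pixels against a noise limit can be sketched as follows. This is an illustrative Python/NumPy sketch, not the patent's implementation; the grid coordinates and threshold value are assumptions chosen for the example.

```python
import numpy as np

def grid_changed(prev_frame, curr_frame, grid_coords, noise_threshold):
    """Return the grid points whose interframe intensity change
    exceeds the noise threshold set for the scene."""
    rows = [r for r, c in grid_coords]
    cols = [c for r, c in grid_coords]
    # Signed arithmetic so that decreases in intensity are also caught.
    diff = np.abs(curr_frame[rows, cols].astype(int)
                  - prev_frame[rows, cols].astype(int))
    return [(pt, int(d)) for pt, d in zip(grid_coords, diff)
            if d > noise_threshold]

# A static scene produces no above-threshold changes; a large change
# at one monitored point is reported.
prev = np.zeros((8, 8), dtype=np.uint8)
curr = prev.copy()
curr[3, 4] = 200  # a large interframe change at one grid point
grid = [(3, 4), (5, 5), (0, 0)]
hits = grid_changed(prev, curr, grid, noise_threshold=20)
```

Only the monitored grid points are read, so the per-frame cost scales with the grid size rather than the full image.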
Once an interframe change of a type that is above the threshold set for the scene occurs, there is a high probability that an intrusion has taken place that necessitates an intervention by a security service provider or the owner. Thus the system could be configured to either set an alarm and/or send the frames that have been identified as having the abnormal changes to the security service provider or the owner for final analysis/decision.
Implementation
Viewed from one aspect, this invention may be considered to mimic the higher levels of image understanding achieved in human vision within the electronics associated with the camera, to the point of identifying that there is a moving object within the scene, and to then use Fuzzy Logic classification or other techniques such as Principal Component Analysis or other non-exact mathematical techniques to assign the object to a specific SET with a high degree of probability. Based on the assignment, a predefined action can then be taken. For example, a frame (or a number of frames) of the image can be transmitted to another location where a human can make the final classification of the object and then define the appropriate action.
An advantage of an embodiment of this invention is that the preprocessing included in the camera will enable the system to operate in an autonomous mode for most of the time, e.g. 99%, or less preferably 95%, or even less preferably 85% of the time the camera is on, only requesting support from the owner/human when a predefined set of conditions has been identified, for example an upright figure has moved into the scene.
It is normal practice today to capture and store video frames in a memory.
Each piece of the digitally coded information representing each pixel of each video frame is stored at a separate address within a memory, for either display in real time, storage, or onward transmission to a remote location. Therefore, information relating to a subset of pixels may be defined by choosing a subset of addresses within the memory that can then be further interrogated. Alternatively, information obtained from a subset of the image could be stored in a specific memory that relates the real memory address to the location of the pixel in the imager. In this manner a grid of pixels can be defined that are chosen to be monitored from one image frame to another. Within a static environment the information representing each pixel on the grid will remain more or less constant from one video frame to the next. Obviously there will be changes to the scene over a period of time associated with changes in lighting conditions etc. These slow changes can easily be discarded by selecting only those interframe changes that are major or that are above a defined threshold of change.
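The idea of monitoring a grid by interrogating a chosen subset of memory addresses in the frame buffer can be sketched as below. The mapping from (row, column) points to flat addresses is an illustrative assumption for a row-major frame buffer.

```python
import numpy as np

HEIGHT, WIDTH = 8, 8

def grid_addresses(points, width=WIDTH):
    """Map (row, col) grid points to flat addresses in a row-major
    frame buffer, mimicking a subset of memory addresses chosen
    for interrogation."""
    return np.array([r * width + c for r, c in points])

def read_grid(frame, addresses):
    """Interrogate only the chosen addresses of the stored frame,
    leaving the rest of the image untouched."""
    return frame.ravel()[addresses]

# A frame whose pixel value equals its flat address, for illustration.
frame_n = np.arange(HEIGHT * WIDTH, dtype=np.uint8).reshape(HEIGHT, WIDTH)
addrs = grid_addresses([(0, 0), (1, 2), (7, 7)])
values = read_grid(frame_n, addrs)  # intensities at the monitored pixels
```

Comparing `values` between successive frames, rather than whole frames, is what keeps the monitoring cheap.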
The number of pixels that represent the grid can be chosen to optimise the trade-off between the sensitivity of the system to the identification of change (i.e. the number of pixels chosen) and computing complexity.
Further, the shape of the grid can also be chosen to optimise the sensitivity of the system versus computing complexity. For example the system could have a grid of horizontal lines, vertical lines, points, cross-hatches, diagonal lines, or a grid that is tuned to the scene that the system is monitoring, e.g. more pixels are chosen at a door or a window where an intruder may initially appear.
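The grid shapes mentioned above (horizontal lines, vertical lines, dots, cross-hatch) can be generated programmatically. The shape names and spacing parameter below are hypothetical; the patent leaves the grid form open.

```python
def make_grid(height, width, shape="cross_hatch", spacing=4):
    """Generate a list of (row, col) monitored pixels for several
    illustrative grid shapes."""
    if shape == "horizontal":
        return [(r, c) for r in range(0, height, spacing)
                for c in range(width)]
    if shape == "vertical":
        return [(r, c) for r in range(height)
                for c in range(0, width, spacing)]
    if shape == "cross_hatch":
        # Union of horizontal and vertical lines, deduplicated.
        return sorted(set(make_grid(height, width, "horizontal", spacing)
                          + make_grid(height, width, "vertical", spacing)))
    if shape == "dots":
        return [(r, c) for r in range(0, height, spacing)
                for c in range(0, width, spacing)]
    raise ValueError(f"unknown grid shape: {shape}")

grid = make_grid(8, 8, "dots", spacing=4)
```

A scene-tuned grid would simply append extra points around a door or window to this list.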
Further to the grid being defined to be specific to a scene, it could also be weighted so that specific areas of the scene are more sensitive to change than others.
Further the system can emphasise certain locations in the scene such as a door or a window by increasing the weight that is applied to changes that take place from one frame to the next frame in that location.
Furthermore, the system could define that changes to the grid in one axis are more sensitive than those that take place in another axis.
Further, it is possible for the system to differentiate between the aspect ratios of objects, such as between upright objects (e.g. tall/narrow humans) moving through the scene versus horizontal objects (e.g. wide/shallow/small animals) moving through the scene, by categorizing the changes in adjacent points on the grid and by weighting the transition of vertical lines higher than horizontal lines representing the grid.
Further if the system has a grid comprising of a set of vertical and horizontal lines covering the scene, it is possible to track the movement of an object through the scene by recognizing the leading and trailing edge of the object as it creates a change in the values of the pixel information from one frame to another.
Further, the system could weight the cumulative number of points of intersection of the grid that are changing to represent different intruders. For example, if there is an above-threshold change in points of the grid that are aligned to the horizontal, the system could assume that a guard dog is moving across the scene, whilst if there are a number of above-threshold changes that take place on the vertical lines of the grid, the system could assume that an upright human is moving through the scene. A static large splodge or blob on the grid might be interpreted as meaning that a fly had landed on the camera. Thus the type of interframe changes experienced in the grid can be used to classify the type of intrusion being monitored within the scene.
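The weighting of vertical versus horizontal grid changes to classify the intruder type can be sketched as follows. The weights, thresholds and labels are illustrative assumptions, not values from the patent.

```python
def classify_intrusion(vertical_hits, horizontal_hits,
                       v_weight=2.0, h_weight=1.0, human_threshold=6.0):
    """Classify an intrusion from counts of above-threshold changes on
    the vertical and horizontal grid lines.  Vertical changes are
    weighted higher, per the upright-human heuristic."""
    score = v_weight * vertical_hits + h_weight * horizontal_hits
    if score < 1.0:
        return "none"
    if vertical_hits > horizontal_hits and score >= human_threshold:
        return "upright (possible human)"
    if horizontal_hits > vertical_hits:
        return "horizontal (possible animal)"
    return "unclassified"

label = classify_intrusion(vertical_hits=4, horizontal_hits=1)
```

A real system would derive the hit counts from the grid comparison and tune the weights to the monitored scene.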
Furthermore, there will be instances where small dust particles or an insect may float by the lens of the camera. Again, such small items can be discarded by monitoring the changes from one frame to another and ensuring that a change also takes place at a number of adjacent points on the grid. Thus it is possible to set a threshold on the size of the object that can trigger an alarm by adjusting the number of adjacent points on the grid where change is necessary.
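The adjacency requirement for rejecting small objects can be sketched as below. Using 8-connectivity between grid points and a minimum neighbour count are assumptions for the example.

```python
def significant_change(changed_points, min_adjacent):
    """Reject small objects (dust, insects): report a significant change
    only if some changed grid point has at least `min_adjacent` changed
    neighbours (8-connectivity)."""
    changed = set(changed_points)
    for r, c in changed:
        neighbours = sum((r + dr, c + dc) in changed
                         for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                         if (dr, dc) != (0, 0))
        if neighbours >= min_adjacent:
            return True
    return False

# A single isolated change (e.g. a dust particle) is discarded...
small = significant_change([(2, 2)], min_adjacent=2)
# ...while a cluster of adjacent changes triggers.
large = significant_change([(2, 2), (2, 3), (3, 2)], min_adjacent=2)
```

Raising `min_adjacent` raises the minimum object size that can trigger an alarm.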
To identify these changes each of the values representing individual pixels used to represent the grid can be compared to the information that represents the corresponding pixel within the preceding frame to identify if there is a change in the information that is above the predefined threshold.
By then correlating the changes in the values representing pixels that are directly adjacent each other on the grid the system can build a value that is representing the overall change that has taken place within the two consecutive frames. By integrating the change over many frames the system can identify a movement of an object instance through the scene.
In one embodiment of the invention, the system looks for a weighted change over a number of adjacent points of the grid, over a number of frames, to trigger a response, i.e. to decide that there is an incident to report. Such a system can be implemented in a number of different ways, for example using a Fuzzy Logic engine which accumulates weighted changes in pixel data from one frame to another, or over multiple frames, where the accumulated response to each individual Fuzzy decision will trigger an overall positive, negative or neutral output.
Utilising Fuzzy Logic, the system can accumulate the Fuzzy outputs from grid points in a number of consecutive frames to arrive at a single value which can be compared to a threshold or a set of thresholds to arrive at a yes/no result: by identifying that over a predefined number of frames the data representing a predefined number of adjacent pixels has changed by more than a predefined threshold, an intruder can be identified as being present. Further, in Fuzzy Logic the weighting associated with each fuzzy decision may be changed to alter the decision-making process over time, i.e. the system can be tuned to the environment or can learn from consistent false alarms associated with the system. For example, if sun shining onto a vase in the afternoon triggers an erroneous response, the system may be modified to reduce the weight associated with the grid in the location of the vase, to effectively tune it out of the scene.
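The learning step described above, reducing the weight of grid points that repeatedly cause false alarms, can be sketched as follows. The halving factor and the point coordinates are illustrative assumptions.

```python
def tune_out_false_alarms(weights, false_alarm_points, factor=0.5):
    """Reduce the weight of grid points that repeatedly cause false
    alarms (e.g. sun shining onto a vase), effectively tuning those
    points out of the scene over time."""
    return {pt: (w * factor if pt in false_alarm_points else w)
            for pt, w in weights.items()}

weights = {(3, 4): 1.0, (5, 5): 1.0}
# After a confirmed false alarm attributed to point (5, 5):
tuned = tune_out_false_alarms(weights, false_alarm_points={(5, 5)})
```

Repeated application drives the offending point's contribution toward zero while leaving the rest of the grid at full sensitivity.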
An example of a suitable Fuzzy logic engine capable of identifying changes in the grid of selected pixels is outlined below.
[Drawing: two copies of a grid of squares are shown side by side, each square labelled A, B, C, D, E, F etc; the left-hand grid shows the pixel values at frame t = n and the right-hand grid the values at frame t = n+1.]
A grid suitable for imposing over an image is illustrated above. Each square of the grid is labelled A, B, C, D, E, F etc, with the nth frame shown on the left-hand side of the drawing, and the succeeding (n+1th) frame shown on the right-hand side of the drawing.
A threshold level for each of A, B, C is set based on the image to be monitored.
A threshold level of X is set above which a sum of the output of the Fuzzy set comprising A, B, C defines the presence of a human intruder moving through the scene.
A threshold level Y (less than X) is set below which the sum of the output of the set A, B, C defines noise. A value of the sum of the output sets between X and Y then defines the presence of a non-human intruder moving through the scene.
Based on the values representing the pixel information recorded at A, B and C at time t = n, and the subsequent frame t = n+1, create the Fuzzy Logic Set as follows:
If A(t=n+1) - A(t=n) > threshold A and B(t=n+1) - B(t=n) > threshold B, set output to level 0.7.
If A(t=n+1) - A(t=n) > threshold A and C(t=n+1) - C(t=n) > threshold C, set output to level 0.3.
If A(t=n+1) - A(t=n) < threshold A and B(t=n+1) - B(t=n) < threshold B, set output to level 0.
If A(t=n+1) - A(t=n) < threshold A and C(t=n+1) - C(t=n) < threshold C, set output to level 0.
Repeat the calculation of the Fuzzy Set for other selected points on the grid, for example D, E, F.
Repeat the calculation for the next frame, where t = n+2, until the results for N frames and Z points have been calculated.
Sum the output of the Fuzzy Sets after the N frames and define the output to be: if the sum of output sets for points A to Z over N frames is greater than X, then sound the alarm or transmit the prior frame(s) to an operator for detailed analysis.
If the sum of output sets for points A to Z over N frames is less than X but greater than Y, then assume that a non-human intruder has entered the scene. Optionally, store the results of all Fuzzy set calculations for off-line analysis for threshold optimisation.
If the sum of output sets for points A to Z over N frames is less than Y, no action is to be taken. Reset t to zero and rerun the Fuzzy set calculations.
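The worked Fuzzy Logic example above can be sketched in code as follows. The rule structure and the 0.7/0.3 output levels come from the example; the particular threshold values, X, Y and pixel data below are assumptions chosen to exercise each branch.

```python
def fuzzy_output(prev, curr, thresholds):
    """One Fuzzy Set evaluation for grid points A, B, C: prev and curr
    map point labels to pixel values at t=n and t=n+1."""
    dA = curr["A"] - prev["A"]
    dB = curr["B"] - prev["B"]
    dC = curr["C"] - prev["C"]
    if dA > thresholds["A"] and dB > thresholds["B"]:
        return 0.7
    if dA > thresholds["A"] and dC > thresholds["C"]:
        return 0.3
    return 0.0  # below-threshold changes contribute nothing

def classify(frames, thresholds, X, Y):
    """Sum the Fuzzy outputs over consecutive frame pairs and compare
    the total to thresholds X and Y (with X > Y)."""
    total = sum(fuzzy_output(frames[i], frames[i + 1], thresholds)
                for i in range(len(frames) - 1))
    if total > X:
        return "alarm: human intruder"
    if total > Y:
        return "non-human intruder"
    return "noise: no action"

th = {"A": 10, "B": 10, "C": 10}
frames = [{"A": 0,   "B": 0,  "C": 0},
          {"A": 50,  "B": 40, "C": 5},   # A and B both change: 0.7
          {"A": 100, "B": 90, "C": 5}]   # A and B change again: 0.7
result = classify(frames, th, X=1.0, Y=0.2)
```

Two consecutive 0.7 outputs sum to 1.4, exceeding X, so the sketch reports a human intruder.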
An embodiment in accordance with the present invention may be implemented in hardware, firmware, software or any combination of two or more of hardware, firmware or software. For example, the analysis of the Fuzzy Set may be implemented using a microprocessor, ASIC or DSP.
Information regarding the formation of Fuzzy Sets may be found at: http://www.seattlerobotics.org/encoder/mar98/fuz/flindex.html.
An alternative approach would be to identify the changes that have taken place in the data representing a group of pixels located in a similar area of the scene, and to use Principal Component Analysis to track the changes as they transition to pixels adjacent to the group that has been identified.
An alternative approach could be to use a conventional mathematical algorithm to calculate the changes from one frame to another at the grid points and to set a hard threshold to trigger an alarm.
The above approaches to tracking interframe changes of the information associated with the grid are not the only ways in which the system could identify abnormal changes in the scene, and other methods may be employed. However, they represent methods which may lend themselves to low-complexity implementations.

Claims (24)

1. Data processing apparatus for automatically identifying a change in an imaged scene, said apparatus configured to: segment an image of said scene into a plurality of segments each comprising a plurality of image elements; identify a change in intensity of a segment relative to an intensity of said segment in a preceding image of said scene; and determine whether said change in intensity exceeds a threshold value.
2. Data processing apparatus according to claim 1, further configured to determine a rate of change of intensity in said segment.
3. Data processing apparatus according to claim 1 or claim 2, further configured to assign a rate value to a segment to increase the sensitivity of said apparatus to intensity changes in that segment.
4. Data processing apparatus according to claim 1, further configured to determine an aspect ratio of a plurality of proximal segments each determined to have a change of intensity exceeding said threshold value.
5. Data processing apparatus according to claim 4, further configured to assign a weight value to a predefined aspect ratio.
6. Data processing apparatus according to any preceding claim, further configured to determine movement of intensity change between proximal segments.
7. Data processing apparatus according to claim 6, further configured to assign a rate to a predefined direction of intensity change.
8. Data processing apparatus according to any preceding claim, further configured to identify an intensity change exceeding a threshold value.
9. Data processing apparatus according to any preceding claim, further configured to identify an intensity change having a value between a first threshold value and a second threshold value.
10. Data processing apparatus according to claim 9, further configured to identify an intensity change having a value in a one of multiple threshold ranges.
11. A surveillance system comprising an image capture element and apparatus according to any one of claims 1 to 10.
12. A method of operating a data processing apparatus to automatically identify a change in an imaged view, the method comprising: segmenting an image of said view into a plurality of segments each comprising a plurality of image elements; identifying a change in intensity of a segment relative to an intensity of said segment in a preceding image of said view; and determining whether said change in intensity exceeds a threshold value.
13. A method according to claim 12, further comprising determining an aspect ratio of a plurality of proximal segments each determined to have an intensity exceeding said threshold value.
14. A method according to claim 13, further comprising assigning a weight to a predefined aspect ratio thereby to increase the sensitivity of said method to changes of intensity where a plurality of segments has a predefined aspect ratio.
15. A method according to any one of claims 12 to 14, further comprising determining a rate of change of intensity in said segment.
16. A method according to any one of claims 12 to 15, further comprising assigning a rate value to a segment to increase the sensitivity to intensity changes in that segment.
17. A method according to any one of claims 12 to 16, further comprising determining movement of intensity change between proximal segments.
18. A method according to claim 17, further comprising assigning a rate value to a direction of movement of said intensity change between proximal segments.
19. A method of operating a surveillance system comprising capturing a first image of said scene, capturing a second image of said scene, and performing the method according to any one of claims 9 to 18.
20. A method according to any one of claims 12 to 19, further comprising identifying an intensity change exceeding a threshold value.
21. A method according to any one of claims 12 to 20, further comprising identifying an intensity change having a value between a first threshold value and a second threshold value.
22. A method according to claim 21, further comprising identifying an intensity change having a value in a one of multiple threshold ranges.
23. Apparatus substantially as hereinbefore described with reference to the accompanying drawings.
24. A method substantially as hereinbefore described with reference to the accompanying drawings.
GB0504091A 2005-02-28 2005-02-28 Identifying scene changes Withdrawn GB2423661A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB0504091A GB2423661A (en) 2005-02-28 2005-02-28 Identifying scene changes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0504091A GB2423661A (en) 2005-02-28 2005-02-28 Identifying scene changes

Publications (2)

Publication Number Publication Date
GB0504091D0 GB0504091D0 (en) 2005-04-06
GB2423661A true GB2423661A (en) 2006-08-30

Family

ID=34430342

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0504091A Withdrawn GB2423661A (en) 2005-02-28 2005-02-28 Identifying scene changes

Country Status (1)

Country Link
GB (1) GB2423661A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5732146A (en) * 1994-04-18 1998-03-24 Matsushita Electric Industrial Co., Ltd. Scene change detecting method for video and movie
WO1998023085A1 (en) * 1996-11-20 1998-05-28 Telexis Corporation Method of processing a video stream
EP0896466A2 (en) * 1997-08-06 1999-02-10 General Instrument Corporation Fade detector for digital video
GB2364608A (en) * 2000-04-11 2002-01-30 Paul Conway Fisher Video motion detector which is insensitive to global change
WO2002069620A1 (en) * 2001-02-28 2002-09-06 Scyron Limited Method of detecting a significant change of scene
WO2003052711A1 (en) * 2001-12-18 2003-06-26 Hantro Products Oy Method and device for identifying motion

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2122537A2 (en) * 2007-02-08 2009-11-25 Utc Fire&Security Corporation System and method for video-processing algorithm improvement
EP2122537A4 (en) * 2007-02-08 2010-01-20 Utc Fire & Security Corp System and method for video-processing algorithm improvement
GB2447133A (en) * 2007-03-02 2008-09-03 Bosch Gmbh Robert Automatic evaluation and monitoring of multiple objects within a scene
GB2447133B (en) * 2007-03-02 2009-10-21 Bosch Gmbh Robert Apparatus, procedure and computer program for image-supported tracking of monitored objects
US8860815B2 (en) 2007-03-02 2014-10-14 Robert Bosch Gmbh Apparatus, method and computer program for image-based tracking of surveillance objects
FR2929734A1 (en) * 2008-04-03 2009-10-09 St Microelectronics Rousset METHOD AND SYSTEM FOR VIDEOSURVEILLANCE.
US8363106B2 (en) 2008-04-03 2013-01-29 Stmicroelectronics Sa Video surveillance method and system based on average image variance

Also Published As

Publication number Publication date
GB0504091D0 (en) 2005-04-06

Similar Documents

Publication Publication Date Title
US10936655B2 (en) Security video searching systems and associated methods
EP4105101A1 (en) Monitoring system, monitoring method, and monitoring device for railway train
CN109686109B (en) Parking lot safety monitoring management system and method based on artificial intelligence
US10204520B2 (en) Unmanned aerial vehicle based security system
Wheeler et al. Face recognition at a distance system for surveillance applications
KR101085578B1 (en) Video tripwire
KR100905504B1 (en) Video tripwire
CN111770266A (en) Intelligent visual perception system
CN109711318B (en) Multi-face detection and tracking method based on video stream
Kumar et al. Study of robust and intelligent surveillance in visible and multi-modal framework
US20220122360A1 (en) Identification of suspicious individuals during night in public areas using a video brightening network system
US10719717B2 (en) Scan face of video feed
KR102392822B1 (en) Device of object detecting and tracking using day type camera and night type camera and method of detecting and tracking object
CN113723369B (en) Control method, control device, electronic equipment and storage medium
JP2021077350A (en) Method and device for generating object classification for object
US20230334966A1 (en) Intelligent security camera system
CN112232107A (en) Image type smoke detection system and method
US20060114322A1 (en) Wide area surveillance system
KR101485512B1 (en) The sequence processing method of images through hippocampal neual network learning of behavior patterns in case of future crimes
CN201142737Y (en) Front end monitoring apparatus for IP network video monitoring system
GB2423661A (en) Identifying scene changes
KR101814040B1 (en) An integrated surveillance device using 3D depth information focus control
KR102111162B1 (en) Multichannel camera home monitoring system and method to be cmmunicated with blackbox for a car
KR20130047131A (en) Method and system for surveilling contents of surveillance using mobile terminal
JP7002009B2 (en) Monitoring parameter update system, monitoring parameter update method and program

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)