WO2013093771A1 - Monitoring a scene - Google Patents

Monitoring a scene

Info

Publication number
WO2013093771A1
Authority
WO
WIPO (PCT)
Prior art keywords
scene
abnormal condition
normal
sensors
outputs
Prior art date
Application number
PCT/IB2012/057420
Other languages
French (fr)
Inventor
Gianluca Monaci
Tommaso Gritti
Harry Broers
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V.
Publication of WO2013093771A1

Classifications

    • H ELECTRICITY
    • H05 ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05B ELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B47/00Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
    • H05B47/10Controlling the light source
    • H05B47/105Controlling the light source in response to determined parameters
    • H05B47/115Controlling the light source in response to determined parameters by determining the presence or movement of objects or living beings
    • H05B47/125Controlling the light source in response to determined parameters by determining the presence or movement of objects or living beings by using cameras
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40Control techniques providing energy savings, e.g. smart controller or presence detection

Definitions

  • This invention relates to monitoring a scene, for example using a camera.
  • these boundary conditions are defined by the sensor manufacturer, such as the temperature range within which one sensor can operate.
  • the LumiMotion streetlamp module includes a camera sensor used to detect the presence of a person close to the luminaire and turn on the lamp, which is normally dimmed.
  • the LumiMotion detection and tracking algorithm is designed to work under specific low-light conditions typical of the installation.
  • US Patent Application US2008/0265799A1 addresses the problem of providing illumination in a manner that is energy efficient and intelligent.
  • Distributed processing across a network of illuminators is used to control the illumination for a given environment.
  • the network controls the illumination level and pattern in response to light, sound, and motion.
  • the network may also be trained according to uploaded software behavior modules, and subsets of the network may be organized onto groups for illumination control and maintenance reporting.
  • a first aspect of the invention provides a method for controlling a lighting system, the method comprising: processing outputs of one or more sensors to monitor a scene observed by the sensors; in response to detecting motion in the scene, triggering a first action comprising switching on a lamp; and in response to detecting an abnormal condition in the scene, triggering a second action
  • detecting an abnormal condition comprises: learning a normal scene; and determining that an abnormal condition exists when a significant change in the normal scene is detected
  • Detecting motion may comprise monitoring changes in sensor output over a first period of time
  • detecting an abnormal condition may comprise monitoring changes in sensor output over a second period of time, wherein the second period of time is longer than the first period of time.
  • Determining that an abnormal condition exists when a significant change in the normal scene is detected may comprise calculating a Chi-square histogram difference.
  • Learning a normal scene may involve slowly evolving a stored base histogram for plural pixel elements over consecutive images. This evolution may involve calculating the weighted moving average of the histograms. Each histogram comprises an occurrence, or count, value for each of plural bins of the histogram. The bins may be of intensity or range of intensities.
  • the outputs of the one or more sensors may be spatially distinct and determining a significant change in the normal scene may comprise determining a number of elements of outputs of the one or more sensors that deviate from the normal by a threshold amount.
  • Spatially distinct sensor outputs may relate to camera outputs, or other sensors where different parts of the scene are observed separately from one another. This spatial distinction may be by way of abutting regions, such as found with a camera, or there may be gaps between adjacent areas or adjacent areas may overlap to some extent.
  • Determining a significant change in the normal scene may comprise comparing the number of elements of outputs of the one or more sensors that deviate from the normal by a threshold amount to a second threshold.
  • Processing outputs of the one or more sensors to monitor the scene may comprise monitoring plural elements of outputs of the one or more sensors each comprising two or more adjacent pixels of the one or more sensors.
  • the sensor may be one of a range of different devices, such as an optical camera, a microphone, a time-of-flight camera, or a Passive Infrared (PIR) camera.
  • Another aspect of the invention provides a computer program comprising machine readable instructions that when executed by computing apparatus control it to perform the method of any preceding claim.
  • a third aspect of the invention provides apparatus for controlling a lighting system, the apparatus comprising:
  • a processor for processing outputs of one or more sensors to monitor a scene observed by the sensors
  • a first trigger responsive to detecting motion in the scene by triggering a first action comprising switching on a lamp
  • an abnormal condition detector arranged to: learn a normal scene; and determine that an abnormal condition exists when a significant change in the normal scene is detected
  • a second trigger responsive to detecting an abnormal condition in the scene by triggering a second action
  • the outputs of the one or more sensors may be spatially distinct and the abnormal condition detector may be configured to determine a significant change in the normal scene by determining a number of elements that deviate from the normal by a threshold amount.
  • a fourth aspect of the invention provides apparatus comprising one or more processors, one or more memories and computer code stored in the one or more memories, the computer code being configured to control the one or more processors to perform a method of controlling a lighting system comprising:
  • processing outputs of one or more sensors to monitor a scene observed by the sensors; in response to detecting motion in the scene, triggering a first action comprising switching on a lamp; and in response to detecting an abnormal condition in the scene, triggering a second action
  • detecting an abnormal condition comprises: learning a normal scene; and determining that an abnormal condition exists when a significant change in the normal scene is detected
  • Figure 1 is a schematic diagram of a system according to aspects of the invention.
  • Figure 2 is a flow chart showing operation of the system of Figure 1 according to aspects of the invention
  • Figure 3a to 3d are schematic figures illustrating camera outputs in different operating conditions.
  • Figure 4 is a flow chart detailing options for part of the flow chart of Figure 2.
  • a streetlamp system 100 has a control module 112.
  • the control module 112 comprises a processor 103 and a memory 104.
  • the processor 103 and the memory 104 are connected to other components of the streetlamp system 100 by an interface 102. These components include at least one camera 101, a lamp system controller 108 and an RF transceiver 110.
  • a power supply 107 is connected to power the sensing module 112 and the lamp system controller 108.
  • the lamp system controller 108 is connected to a lamp 109.
  • the control module 112 is able to control switching of the lamp 109 on and off.
  • the lamp 109 may have a power output of tens or hundreds of Watts, thereby providing street lighting.
  • the camera 101 here is sensitive to light in visible and infra red parts of the electromagnetic spectrum.
  • the camera 101 may be provided at an uppermost part of the streetlamp system 100 and be directed downwards. In this way, the camera is configured to monitor an area beneath the streetlamp system. This area includes the area that is illuminated by the lamp 109 and may also include areas outside of the illuminated area.
  • the camera 101 may be provided with a fish eye lens. In this way, the camera 101 is provided with a large field of view.
  • the system has spatial resolution, because of characteristics of the camera 101.
  • the memory 104 may be a non-volatile memory such as read-only memory (ROM), a hard disk drive (HDD) or a solid state drive (SSD).
  • the memory 104 stores, amongst other things, an operating system 105 and one or more software applications 106.
  • the memory 104 is used for the temporary storage of data as well as permanent storage. Alternatively, there may be separate memories for temporary and non-temporary storage.
  • the software application 106 contains computer code which, when executed by the processor 103 in conjunction with the memory 104, learns a scene, detects motion and detects abnormal conditions.
  • the software application 106 contains computer code which, when executed by the processor 103, also controls operation of the camera 101, the lamp system controller 108 and the RF transceiver 110.
  • the RF transceiver 110 allows communication between the streetlamp system 100 and a control centre 111.
  • the system 100 is operational, and the method is performed, only at night-time.
  • the streetlamp system 100 is controlled to be operational in darkness so as to provide illumination when needed.
  • Night-time can be detected through the use of ambient light sensing either through use of a separate sensor (not shown) or through the camera 101.
  • night-time can be detected through the use of a clock and knowledge of sunset and sunrise times.
  • the method starts with the camera 101 capturing a first image of a scene in step S1.
  • the feature extraction procedure in step S2 entails dividing the image into blocks of pixels, and computing histograms of grey-level values for each block, and storing the histograms in the memory 104.
  • a normal model is initialized. This may involve processing the first image using an algorithm, as is explained in more detail below.
  • step S3 may involve reading a normal model stored in the memory 104, for instance from a factory setting or from previous operation of the streetlamp system 100.
  • the normal model is initialized for each block of pixels.
  • Each block of pixels may be termed an element.
  • step S4 the imaging sensor 101 is controlled to capture a second image, and its features are extracted in step S5.
  • this second image is used to determine whether motion is present in the scene.
  • Step S6 involves comparing the image captured in step S4 with the immediately preceding image.
  • the immediately preceding image is the image captured at step S1.
  • the immediately preceding image is the image that was captured on the preceding execution of step S4.
  • Motion detection may be performed in any suitable manner and is not discussed in detail here. If motion is detected in step S6, the method proceeds to step S7.
  • step S7 the device 102 is configured to act upon motion detection.
  • step S7 may involve activating the lamp 109, if it is not already activated. If it is already activated, it may involve maintaining the lamp 109 activated. Step S7 may also involve storing a record of a time at which the lamp 109 was activated. If the activation of the lamp 109 is timer based, in that the lamp remains activated for a set time period after motion is last detected, step S7 may involve resetting the timer. After step S7, the method proceeds to step S8.
  • step S8 the normal model is updated and the updated normal model is stored in the memory 104.
  • step S9 abnormal condition detection is performed to determine if an abnormal condition is present. If no abnormal condition is detected at step S9 then the method returns to step S4. If an abnormal condition is detected at step S9, an alarm or fallback mode is entered at step S10. Step S10 may involve the triggering of an alarm. The alarm may be generated locally at the streetlamp system 100 or may be communicated to the control centre 111. Step S10 may alternatively or in addition involve triggering fallback actions. A fallback action may be illumination of the lamp 109. Step S10 also inhibits the updating of the normal in step S8. Step S10 may involve undoing any updates to the normal that have been provided in a period of time prior to step S10 being performed. After step S10, the method returns to step S4.
  • Step S4 is performed periodically. The interval between successive performances of step S4 has an effect on a number of things, including the amount of processing required and the ability of the system to detect motion. In these embodiments, step S4 is performed at 200 ms intervals, so 5 times per second, although alternative intervals are also envisaged.
  • Step S9 can take any suitable form. Before describing some suitable forms for this step, we will explain what the camera observes in different conditions with reference to Figures 3a to 3d.
  • the scene observed by the camera 101 may be defined as one of 'normal scene', 'normal motion' or 'abnormal condition'.
  • the control module 112 is configured to differentiate between the possible scenes in steps S6 and S9.
  • a grey-scale histogram is populated as follows.
  • the total intensity range captured by the camera 101 (0-255 for an 8 bit camera) is divided into a number of bins N.
  • the captured intensity is quantized to get the index of the corresponding bin: intensityBinIndex = pixelIntensity/N + 1 (1), and the corresponding value in bin number intensityBinIndex is incremented by one.
  • Figures 3a-3d show grey-scale histogram representations of camera outputs at different pixel blocks at different moments in time. From these, it is possible to identify the differences between these possible types of camera output.
  • plural intensity bins are shown along the horizontal axis from lower intensity on the left side to the higher intensity on the right side. Each intensity bin may relate to one specific intensity or to a small range of intensities.
  • the vertical axis indicates the number of pixels in the block with the corresponding intensity.
  • the number of histogram bins and the block resolution may vary. In this example, there are 7 bins.
  • Figures 3a and 3c show a first image 300 from the camera 101 at two different points in time, and the grey-scale histograms associated with three separate pixel blocks.
  • Figures 3b and 3d show grey-scale histograms of the same pixel blocks in a second image from the camera 101 a short time later.
  • Figure 3a depicts a 'normal scene' with the histograms associated with first to third pixel blocks 301, 302 and 303.
  • Figure 3b illustrates an evolution of the scene in Figure 3a whereby an abnormal condition is present in the second image, although absent from the first image shown in Figure 3a. This abnormal condition is highlighted by the significantly different histograms for the corresponding pixel blocks.
  • Figure 3c depicts a black car in the first pixel block 307 superimposed onto the normal scene of pixel block 301 of Figure 3a.
  • Figure 3d shows that the black car has moved across the scene to the third pixel block 312 (309 in Figure 3c) at the time at which the second image was taken.
  • the second pixel block 311 remains substantially unchanged compared to the corresponding pixel block, 308, in Figure 3c.
  • the first pixel block 310 now shows substantially the same scene as pixel block 301 of Figure 3a.
  • the third pixel block 312 shows a grey-scale histogram similar to that of the first pixel block 307 in Figure 3c.
  • control module 112 in operation analyses all pixel blocks continually. This includes monitoring for motion detection, and for abnormal condition detection.
  • the normal model is initialized with the grey-level histograms of the pixel blocks of the first captured image.
  • Learning is then achieved at step S8 by updating.
  • the normal model is updated using the histograms for the most recent image. This update step can be done in several ways. In these embodiments, learning is achieved by updating the histogram for each pixel block using an exponentially weighted moving average rule.
  • the normal model histogram, let us call it NormHist, is updated using the histogram computed for the current image, let us call it Hist_t, according to the rule:
  • NormHist_t = (1 - a) * NormHist_(t-1) + a * Hist_t (2), where a is a constant.
  • the value of a determines how quickly the normal model adapts to new observations. In these embodiments, the value of a is small, so the normal model adapts slowly.
  • the value of a may for instance be 0.001. It may take a value in the range 0.0001 to 0.01, more preferably in the range 0.0005 to 0.005.
  • the method of normal model learning may alternatively include Neural Networks, Support Vector Machines, and clustering.
  • step S9 may be as will now be described with reference to Figure 4.
  • step S1, the value of each bin in the grey-scale intensity histogram is stored for each pixel block.
  • each pixel block is represented by plural values, one for each intensity bin. This is represented graphically by the grey-scale histograms of Figures 3a-3d.
  • step S2 the values for a given intensity bin and pixel block are then summed for all images captured in a rolling window.
  • the window may have a width of 30 seconds, for instance.
  • the average value is then calculated in step S3 by dividing the values by the number of images in the window. This is performed for each intensity bin, to provide an average histogram for a pixel block.
  • step S4, a measure of the difference is calculated between this average value for each intensity bin in each pixel block and the corresponding value for an intensity bin in a pixel block of the normal model at the current time.
  • the measure of the difference may be a simple numerical difference, calculated by subtraction.
  • a measure of the difference for a pixel block is obtained by summing the differences for all the intensity bins in that pixel block. This difference may be termed histogram distance.
  • An alternative, and advantageous, method for calculating histogram distance is the Chi-square technique. Other suitable techniques will be known to the skilled person.
  • step S5 it is determined whether the change represents the presence of an abnormal condition.
  • a significant change in a pixel block is determined if the calculated difference from the block to the normal exceeds a threshold.
  • An abnormal condition is determined if a predetermined proportion (e.g. 60%) or more of the pixel blocks show overall change (positive or negative) above the threshold.
  • because the normal model evolves relatively slowly, the effect of any motion on the normal model is low. Any 'normal motion', whereby an object only remains in a particular pixel for a short period of time, e.g. a bird flying closely in front of the camera, does not contribute significantly to the integrated value per intensity bin per pixel block.
  • the use of thresholds in determining whether a pixel block has changed significantly from the normal model, and in determining whether a deviation from normal has occurred for a significant proportion of pixel blocks, means that 'normal motion' does not cause a (false) detection of an abnormal condition.
  • the above-described method initializes the normal model at step S3 of Figure 2. This method thus provides the best performance when conditions are normal when this step is performed. If, however, the conditions at this time are not normal, the normal model will be learned correctly over time, in particular by repeated execution of step S8 over a period of time.
  • a streetlamp module includes a camera sensor used to detect the presence of a person close to the luminaire and turn on the lamp, which is normally dimmed.
  • a detection and tracking algorithm is configured to function under specific low-light conditions typical of the installation. If the sensing conditions deviate significantly from the normal conditions, the detection algorithm will provide unreliable output and consequently the lighting system will exhibit an unpredictable, dangerous, and certainly undesirable, behavior. For example, if there is a thick fog, nothing can be detected below the fog curtain, and the light would most likely be off all the time.
  • the embodiments described above provide a lighting system with a sensor and processing unit which is capable of learning its normal operating conditions and detecting deviations from the normal model to trigger a fallback operating mode or an alarm signal. Moreover, this is achieved without requiring the system to be programmed to deal with specific abnormal conditions. This provides overall improved operation in the sense that abnormal conditions are more reliably detected, preventing incorrect operation of the streetlamp system in numerous, diverse situations.
  • abnormal conditions are classified by the system 100 into different broad categories depending on the severity and type of deviation from the normal model.
  • different fallback modes and signals are triggered by the system 100, depending on the detected condition.
  • the streetlamp system 100 is connected to a network and control centre 111 to which other streetlamp systems also are connected.
  • distributed network information can be used to make more accurate and fast decisions. For example, if several streetlamp systems 100 seem to detect a heavy fog abnormal condition, it is perhaps likely that there is actually fog and the sensing module 112 can then trigger an abnormal operating condition detection without waiting the usual time (30 seconds in the above).
  • recording of video can be triggered by the system 100, to allow inspection at a later stage.
  • the recorded video could be sent to the control station 111 so as to inform relevant personnel or to allow manual validation of the abnormal condition.
  • grey-scale histograms are used, alternatives are envisaged.
  • histograms of oriented edges are used in place of the grey-level histograms described above.
  • wavelet histograms are used in place of the grey-level histograms described above.
  • histograms of Local Binary Patterns are used in place of the grey-level histograms described above.
  • a camera 101 is used to monitor the scene, it will be appreciated that other sensors that provide spatial information may instead be used.
  • the camera 101 may be replaced by a spatial ultrasound sensor arrangement, a time-of-flight camera, RADAR, or a thermopile array.
  • data from each of plural spatially distinct elements is processed as detailed above.
  • the blocks of pixels allocated by feature extraction in step S2 may have special rules associated with them. For example, a scene may be divided into areas such as 'pavement', 'road', 'bus stop', and 'building' based on the amount of motion detected over a set period of time.
  • the motion detection algorithm does not process pixel blocks associated with the building as no relevant motion will ever be detected there.
  • the abnormal condition detection algorithm is configured not to consider pixel blocks associated with the bus stop in detecting an abnormal condition, for the reason that buses may stop there for relatively long periods and this is not indicative of an abnormal condition.
  • pixel blocks may overlap. Alternatively there could be voids between the areas of sensitivity of adjacent pixel blocks.
  • the fallback mode described in step S10 may take several forms.
  • the fallback mode may be keeping the lamp on continuously.
  • the alarm triggered may be an audible alarm which is part of the system itself, so that the public are aware of a dangerous condition, or the alarm may be an audio or visual alarm in a central control centre which makes only an operator aware of an abnormal condition.
  • the normal is updated every time that a new image is captured, it will be appreciated that the normal may instead be updated less frequently.
  • the camera 101 is replaced by a non-spatially aware sensor (not shown).
  • suitable sensors are passive microphones and passive infrared (PIR) sensors.
  • step SI entails capturing a first acoustic profile of the scene to determine background noise level.
  • Step S2 involves extracting representative signal features from the profile.
  • these features may be signal amplitude ("loudness"). Alternatively, they may be signal variance and dynamic range, signal pitch period and bandwidth, or Mel-frequency cepstral coefficients. Histograms of the extracted features are concatenated or combined (e.g. by weighted average). Histograms of such features are used as feature vectors to represent the scene content.
  • the feature vectors are stored in memory 104.
  • a normal model is initialized. This may involve processing the first profile using an algorithm, as is explained in more detail below. Alternatively, step S3 may involve reading a normal model stored in the memory 104, for instance from a factory setting or from previous operation of the streetlamp system 100. The normal model is initialized for the scene as a whole.
  • step S4 the microphone 101 is controlled to capture a second acoustic profile, and its required features extracted in step S5, as described for S2.
  • this second profile is used to determine whether motion is present in the scene.
  • Step S6 involves comparing the feature vector of the profile captured in step S4 with the features of the immediately preceding profile. In the first execution of step S6, the immediately preceding profile is the profile captured at step S1. In subsequent executions of step S6, the immediately preceding profile is the profile that was captured on the preceding execution of step S4.
  • Motion may be determined if the feature vectors change by some threshold level, e.g. a detected sound source increases or decreases in amplitude. Alternatively, a particular downward or upward (Doppler-like) shift in the source frequency may be taken to indicate motion. If motion is detected in step S6, the method proceeds to step S7.
  • step S7 the device 102 is configured to act upon motion detection.
  • step S7 may involve activating the lamp 109, if it is not already activated. If it is already activated, it may involve maintaining the lamp 109 activated. Step S7 may also involve storing a record of a time at which the lamp 109 was activated. If the activation of the lamp 109 is timer based, in that the lamp remains activated for a set time period after motion is last detected, step S7 may involve resetting the timer. After step S7, the method proceeds to step S8.
  • step S8 the normal model is updated and the updated normal model is stored in the memory 104.
  • step S9 abnormal condition detection is performed to determine if an abnormal condition is present. If no abnormal condition is detected at step S9 then the method returns to step S4. If an abnormal condition is detected at step S9, an alarm or fallback mode is entered at step S10.
  • Step S10 may involve the triggering of an alarm. The alarm may be generated locally at the streetlamp system 100 or may be communicated to the control centre 111. Step S10 may alternatively or in addition involve triggering fallback actions. A fallback action may be illumination of the lamp 109. Step S10 also inhibits the updating of the normal in step S8. Step S10 may involve undoing any updates to the normal that have been provided in a period of time prior to step S10 being performed. After step S10, the method returns to step S4. Step S4 is performed periodically.
  • the interval between successive performances of step S4 affects a number of factors, including the amount of processing required and the ability of the system to detect motion.
  • step S4 is performed at 200 ms intervals, so 5 times per second, in order to generate N samples, although alternative intervals are also envisaged.
  • the normal model is initialized with the feature vector computed for the first captured N samples.
  • the learning is then achieved by updating the normal model online, e.g. every time a new sample set is captured, using the new feature vector computed for the most recent profile.
  • This update step can be done in several ways. Different embodiments use clustering, neural networks, Support Vector Machines and other suitable update techniques.
  • NormFeatureVect_t = (1 - a) * NormFeatureVect_(t-1) + a * FeatureVect_t (3), where a is a constant.
  • the value of a determines how quickly the normal model adapts to new observations. In these embodiments, the value of a is small, so the normal model adapts slowly.
  • the value of a may for instance be 0.001. It may take a value in the range 0.0001 to 0.01, more preferably in the range 0.0005 to 0.005.
  • the value of a determines how quickly the normal model adapts to new observations: if it is equal to 0 there is no learning, if it is 1 the normal model is equal to the most recent observation. Values in between are those typically used, and a slowly- learning a (e.g. 0.001) is recommended for this application.
  • the processing provided by step S9 may be as will now be described.
  • the abnormal condition detection module continuously compares the feature vector computed for the current dataset of N samples with the normal model feature vector. In this example, the Chi-square distance between feature vectors is used as the histogram distance. The module estimates how different the current and normal feature vectors are. If the difference from the normal model is significantly large (e.g. larger than a predefined or learned threshold) for a long time (e.g. 30 seconds), then the system 100 determines the presence of an abnormal condition; a code sketch of this appears after this list.
  • the use of thresholds in determining whether a detected scene has changed significantly from the normal model, and in determining whether a deviation from normal has occurred, means that 'normal motion' does not cause a (false) detection of an abnormal condition.
  • the above-described method initializes the normal model at step S3 of Figure 2. This method thus provides the best performance when conditions are normal when this step is performed. If, however, the conditions at this time are not normal, the normal model will be learned correctly over time, in particular by repeated execution of step S8 over a period of time.
  • a streetlamp module could be provided with a microphone sensor and use it to detect the presence of a person close to the luminaire and turn on the lamp.
  • a detection algorithm is configured to function under specific background level conditions typical of the installation. If the sensing conditions deviate significantly from the normal conditions, the detection algorithm might normally provide unreliable output and consequently the lighting system could exhibit undesirable behavior. Such might occur if for example a car alarm is activated in proximity to the streetlamp.
  • the streetlamp module would identify the presence of the car alarm as abnormal operating conditions and then enter a fallback or alarm mode. Other abnormal conditions can also be detected by the system, triggering entering of the fallback or alarm mode.
  • the RF transceiver 110 is one example of a communication module, and may be replaced with an optical or microwave communications module or a wired connection to a network, for instance.
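As a concrete illustration of the audio variant described in the bullets above, the following is a minimal Python sketch of the normal feature-vector update of rule (3) together with a Chi-square persistence test. It is not taken from the patent: the function names, the thresholds and the 5-profiles-per-second figure used to turn 30 seconds into a run length are all illustrative assumptions.

```python
import numpy as np

def chi_square_distance(v1, v2, eps=1e-9):
    """One common form of the Chi-square distance between histograms."""
    v1 = v1.astype(float)
    v2 = v2.astype(float)
    return 0.5 * np.sum((v1 - v2) ** 2 / (v1 + v2 + eps))

def update_normal_feature_vector(norm_vect, feature_vect, a=0.001):
    """Rule (3): NormFeatureVect_t = (1 - a) * NormFeatureVect_(t-1)
    + a * FeatureVect_t.  A small a gives slow adaptation."""
    return (1.0 - a) * norm_vect + a * feature_vect

def abnormal_for_a_long_time(distances, threshold=2.0, run_length=150):
    """Declare an abnormal condition only when the distance to the
    normal model has exceeded the threshold for a sustained run of
    profiles (e.g. 150 profiles at 5 per second = 30 seconds).
    Threshold and run length are assumed values."""
    recent = distances[-run_length:]
    return len(recent) == run_length and all(d > threshold for d in recent)
```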


Abstract

A camera is used to view a scene beneath a streetlamp system. Blocks of pixels are processed. Operation of a lamp depends on motion being detected by processing of images captured by the camera. Grey-scale histograms of the pixel blocks are calculated. A normal condition is learned from the histograms. An abnormal condition, such as fog or smoke, is detected when the histograms for a significant proportion of the pixel blocks deviate from the normal by a significant amount. When an abnormal condition is detected, an alarm is raised or a fallback mode is entered.

Description

Monitoring a scene
FIELD OF THE INVENTION
This invention relates to monitoring a scene, for example using a camera.
BACKGROUND OF THE INVENTION
Intelligence of lighting systems is steadily increasing to address new efficiency and simplicity requirements. For example, solutions exist which adopt presence detection sensors that automatically turn on/off lights for energy savings purposes.
Many types of the sensors included into smart luminaires or lighting systems have preferred operating conditions. Typically, these boundary conditions are defined by the sensor manufacturer, such as the temperature range within which one sensor can operate.
Philips produces the LumiMotion smart streetlamp. The LumiMotion streetlamp module includes a camera sensor used to detect the presence of a person close to the luminaire and turn on the lamp, which is normally dimmed. The LumiMotion detection and tracking algorithm is designed to work under specific low-light conditions typical of the installation.
US Patent Application US2008/0265799A1 addresses the problem of providing illumination in a manner that is energy efficient and intelligent. Distributed processing across a network of illuminators is used to control the illumination for a given environment. The network controls the illumination level and pattern in response to light, sound, and motion. The network may also be trained according to uploaded software behavior modules, and subsets of the network may be organized onto groups for illumination control and maintenance reporting.
SUMMARY OF THE INVENTION
A first aspect of the invention provides a method for controlling a lighting system, the method comprising:
processing outputs of one or more sensors to monitor a scene observed by the sensors; in response to detecting motion in the scene, triggering a first action comprising switching on a lamp; and
in response to detecting an abnormal condition in the scene, triggering a second action,
wherein detecting an abnormal condition comprises:
learning a normal scene; and
determining that an abnormal condition exists when a significant change in the normal scene is detected.
Detecting motion may comprise monitoring changes in sensor output over a first period of time, and detecting an abnormal condition may comprise monitoring changes in sensor output over a second period of time, wherein the second period of time is longer than the first period of time.
Determining that an abnormal condition exists when a significant change in the normal scene is detected may comprise calculating a Chi-square histogram difference.
Learning a normal scene may involve slowly evolving a stored base histogram for plural pixel elements over consecutive images. This evolution may involve calculating the weighted moving average of the histograms. Each histogram comprises an occurrence, or count, value for each of plural bins of the histogram. The bins may be of intensity or range of intensities.
The outputs of the one or more sensors may be spatially distinct and determining a significant change in the normal scene may comprise determining a number of elements of outputs of the one or more sensors that deviate from the normal by a threshold amount.
Spatially distinct sensor outputs may relate to camera outputs, or other sensors where different parts of the scene are observed separately from one another. This spatial distinction may be by way of abutting regions, such as found with a camera, or there may be gaps between adjacent areas or adjacent areas may overlap to some extent.
Determining a significant change in the normal scene may comprise comparing the number of elements of outputs of the one or more sensors that deviate from the normal by a threshold amount to a second threshold.
Processing outputs of the one or more sensors to monitor the scene may comprise monitoring plural elements of outputs of the one or more sensors each comprising two or more adjacent pixels of the one or more sensors. The sensor may be one of a range of different devices, such as an optical camera, a microphone, a time-of-flight camera, or a Passive Infrared (PIR) camera.
Another aspect of the invention provides a computer program comprising machine readable instructions that when executed by computing apparatus control it to perform the method of any preceding claim.
A third aspect of the invention provides apparatus for controlling a lighting system, the apparatus comprising:
a processor for processing outputs of one or more sensors to monitor a scene observed by the sensors;
a first trigger responsive to detecting motion in the scene by triggering a first action comprising switching on a lamp;
an abnormal condition detector arranged to:
learn a normal scene; and
determine that an abnormal condition exists when a significant change in the normal scene is detected; and
a second trigger responsive to detecting an abnormal condition in the scene by triggering a second action.
The outputs of the one or more sensors may be spatially distinct and the abnormal condition detector may be configured to determine a significant change in the normal scene by determining a number of elements that deviate from the normal by a threshold amount.
A fourth aspect of the invention provides apparatus comprising one or more processors, one or more memories and computer code stored in the one or more memories, the computer code being configured to control the one or more processors to perform a method of controlling a lighting system comprising:
processing outputs of one or more sensors to monitor a scene observed by the sensors;
in response to detecting motion in the scene, triggering a first action comprising switching on a lamp; and
in response to detecting an abnormal condition in the scene, triggering a second action,
wherein detecting an abnormal condition comprises:
learning a normal scene; and determining that an abnormal condition exists when a significant change in the normal scene is detected.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the present invention will now be described, by way of example only, with reference to the accompanying drawings of which:
Figure 1 is a schematic diagram of a system according to aspects of the invention;
Figure 2 is a flow chart showing operation of the system of Figure 1 according to aspects of the invention;
Figure 3a to 3d are schematic figures illustrating camera outputs in different operating conditions; and
Figure 4 is a flow chart detailing options for part of the flow chart of Figure 2.
DETAILED DESCRIPTION
A streetlamp system 100 has a control module 112. The control module 112 comprises a processor 103 and a memory 104. The processor 103 and the memory 104 are connected to other components of the streetlamp system 100 by an interface 102. These components include at least one camera 101, a lamp system controller 108 and an RF transceiver 110. A power supply 107 is connected to power the sensing module 112 and the lamp system controller 108. The lamp system controller 108 is connected to a lamp 109. As such, the control module 112 is able to control switching of the lamp 109 on and off. The lamp 109 may have a power output of tens or hundreds of Watts, thereby providing street lighting.
The camera 101 here is sensitive to light in visible and infrared parts of the electromagnetic spectrum. The camera 101 may be provided at an uppermost part of the streetlamp system 100 and be directed downwards. In this way, the camera is configured to monitor an area beneath the streetlamp system. This area includes the area that is illuminated by the lamp 109 and may also include areas outside of the illuminated area. The camera 101 may be provided with a fisheye lens. In this way, the camera 101 is provided with a large field of view. The system has spatial resolution, because of characteristics of the camera 101.
The memory 104 may be a non-volatile memory such as read-only memory (ROM), a hard disk drive (HDD) or a solid state drive (SSD). The memory 104 stores, amongst other things, an operating system 105 and one or more software applications 106. The memory 104 is used for the temporary storage of data as well as permanent storage. Alternatively, there may be separate memories for temporary and non-temporary storage. The software application 106 contains computer code which, when executed by the processor 103 in conjunction with the memory 104, learns a scene, detects motion and detects abnormal conditions. The software application 106 contains computer code which, when executed by the processor 103, also controls operation of the camera 101, the lamp system controller 108 and the RF transceiver 110. The RF transceiver 110 allows communication between the streetlamp system 100 and a control centre 111.
The method of operation of the system of Figure 1 will now be described with reference to Figure 2. The system 100 is operational, and the method is performed, only at night-time. The streetlamp system 100 is controlled to be operational in darkness so as to provide illumination when needed. Night-time can be detected through the use of ambient light sensing either through use of a separate sensor (not shown) or through the camera 101. Alternatively, night-time can be detected through the use of a clock and knowledge of sunset and sunrise times.
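As a minimal sketch of the ambient-light option (the clock-based option simply compares the current time against stored sunset and sunrise times), night-time might be inferred from the camera itself as follows. The threshold value and function name are assumptions, not from the patent.

```python
import numpy as np

def is_night_time(image, dark_threshold=40):
    """Treat the scene as dark when the mean intensity of an 8-bit
    grey-scale image falls below a threshold (assumed value)."""
    return float(np.mean(image)) < dark_threshold
```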
The method starts with the camera 101 capturing a first image of a scene in step S1. The feature extraction procedure in step S2 entails dividing the image into blocks of pixels, computing histograms of grey-level values for each block, and storing the histograms in the memory 104. At step S3, a normal model is initialized. This may involve processing the first image using an algorithm, as is explained in more detail below.
Alternatively, step S3 may involve reading a normal model stored in the memory 104, for instance from a factory setting or from previous operation of the streetlamp system 100. The normal model is initialized for each block of pixels. Each block of pixels may be termed an element.
At step S4, the imaging sensor 101 is controlled to capture a second image, and its features are extracted in step S5. At step S6, this second image is used to determine whether motion is present in the scene. Step S6 involves comparing the image captured in step S4 with the immediately preceding image. In the first execution of step S6, the immediately preceding image is the image captured at step S1. In subsequent executions of step S6, the immediately preceding image is the image that was captured on the preceding execution of step S4. Motion detection may be performed in any suitable manner and is not discussed in detail here. If motion is detected in step S6, the method proceeds to step S7.
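The text deliberately leaves the motion detector open. One simple possibility, consistent with the per-block histogram features already computed, is to flag motion when any block's histogram changes markedly between consecutive images; the function name and the threshold below are illustrative assumptions.

```python
import numpy as np

def motion_detected(prev_features, features, motion_threshold=100):
    """Possible step-S6 test: compare each block's histogram with the
    same block in the immediately preceding image.  prev_features and
    features map block index -> histogram; the threshold corresponds
    to roughly 10% of the pixels in a 32 x 32 block changing bins
    (assumed figures)."""
    for block, hist in features.items():
        change = np.abs(hist.astype(float) - prev_features[block]).sum()
        if change > motion_threshold:
            return True
    return False
```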
At step S7, the device 102 is configured to act upon motion detection. For example, step S7 may involve activating the lamp 109, if it is not already activated. If it is already activated, it may involve maintaining the lamp 109 activated. Step S7 may also involve storing a record of a time at which the lamp 109 was activated. If the activation of the lamp 109 is timer based, in that the lamp remains activated for a set time period after motion is last detected, step S7 may involve resetting the timer. After step S7, the method proceeds to step S8.
If motion is not detected at step S6, or following step S7, at step S8 the normal model is updated and the updated normal model is stored in the memory 104.
At step S9, abnormal condition detection is performed to determine if an abnormal condition is present. If no abnormal condition is detected at step S9 then the method returns to step S4. If an abnormal condition is detected at step S9, an alarm or fallback mode is entered at step S10. Step S10 may involve the triggering of an alarm. The alarm may be generated locally at the streetlamp system 100 or may be communicated to the control centre 111. Step S10 may alternatively or in addition involve triggering fallback actions. A fallback action may be illumination of the lamp 109. Step S10 also inhibits the updating of the normal in step S8. Step S10 may involve undoing any updates to the normal that have been provided in a period of time prior to step S10 being performed. After step S10, the method returns to step S4.
Step S4 is performed periodically. The interval between successive performances of step S4 has an effect on a number of things, including the amount of processing required and the ability of the system to detect motion. In these embodiments, step S4 is performed at 200 ms intervals, so 5 times per second, although alternative intervals are also envisaged.
Step S9 can take any suitable form. Before describing some suitable forms for this step, we will explain what the camera observes in different conditions with reference to Figures 3a to 3d.
The scene observed by the camera 101, and thus its grey-scale histogram output, may be defined as one of 'normal scene', 'normal motion' or 'abnormal condition'. The control module 112 is configured to differentiate between the possible scenes in steps S6 and S9.
A grey-scale histogram is populated as follows. The total intensity range captured by the camera 101 (0-255 for an 8-bit camera) is divided into a number of bins N. For every pixel block, the captured intensity is quantized to get the index of the corresponding bin as follows:

intensityBinIndex = pixelIntensity/N + 1    (1)

and the corresponding value in bin number intensityBinIndex is incremented by one.
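To make the feature extraction of step S2 and equation (1) concrete, here is a minimal Python/NumPy sketch; the block size, the bin count of 7 (matching the example below) and the function name are illustrative assumptions.

```python
import numpy as np

def block_histograms(image, block_size=32, n_bins=7):
    """Divide an 8-bit grey-scale image into square pixel blocks and
    compute a grey-level histogram per block (step S2).  Each pixel's
    intensity is quantized into one of n_bins bins over 0-255 and the
    matching bin count incremented, mirroring equation (1)."""
    h, w = image.shape
    histograms = {}
    for by in range(0, h, block_size):
        for bx in range(0, w, block_size):
            block = image[by:by + block_size, bx:bx + block_size]
            hist, _ = np.histogram(block, bins=n_bins, range=(0, 256))
            histograms[(by // block_size, bx // block_size)] = hist
    return histograms
```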
Figures 3a-3d show grey-scale histogram representations of camera outputs at different pixel blocks at different moments in time. From these, it is possible to identify the differences between these possible types of camera output. In each histogram, plural intensity bins are shown along the horizontal axis from lower intensity on the left side to the higher intensity on the right side. Each intensity bin may relate to one specific intensity or to a small range of intensities. The vertical axis indicates the number of pixels in the block with the corresponding intensity. The number of histogram bins and the block resolution may vary. In this example, there are 7 bins.
Specifically, Figures 3a and 3c show a first image 300 from the camera 101 at two different points in time, and the grey-scale histograms associated with three separate pixel blocks. Figures 3b and 3d show grey-scale histograms of the same pixel blocks in a second image from the camera 101 a short time later. Figure 3a depicts a 'normal scene' with the histograms associated with first to third pixel blocks 301, 302 and 303. Figure 3b illustrates an evolution of the scene in Figure 3a whereby an abnormal condition is present in the second image, although absent from the first image shown in Figure 3a. This abnormal condition is highlighted by the significantly different histograms for the pixel blocks corresponding to pixel blocks 301, 302 and 303.
Figure 3c depicts a black car in the first pixel block 307 superimposed onto the normal scene of pixel block 301 of Figure 3a. Figure 3d shows that the black car has moved across the scene to the third pixel block 312 (309 in Figure 3c) at the time at which the second image was taken. In Figure 3d, the second pixel block 311 remains substantially unchanged compared to the corresponding pixel block 308 in Figure 3c. In Figure 3d, the first pixel block 310 now shows substantially the same scene as pixel block 301 of Figure 3a, and the third pixel block 312 shows a grey-scale histogram similar to that of the first pixel block 307 in Figure 3c.
Note that these schematics are for purposes of illustration only, and the control module 112 in operation analyses all pixel blocks continually. This includes monitoring for motion detection, and for abnormal condition detection.
The initialization of the normal at step S3 and the updating of the normal at step S8 will now be described.
At step S3, the normal model is initialized with the grey-level histograms of the pixel blocks of the first captured image. Learning is then achieved at step S8 by updating. In these embodiments, every time an image is captured, the normal model is updated using the histograms for the most recent image. This update step can be done in several ways. In these embodiments, learning is achieved by updating the histogram for each pixel block using an exponentially weighted moving average rule. At time t, for each pixel block the normal model histogram, let us call it NormHist, is updated using the histogram computed for the current image, let us call it Hist_t, according to the rule:

NormHist_t = (1 - a) * NormHist_(t-1) + a * Hist_t    (2)

where a is a constant.

The value of a determines how quickly the normal model adapts to new observations. In these embodiments, the value of a is small, so the normal model adapts slowly. The value of a may for instance be 0.001. It may take a value in the range 0.0001 to 0.01, more preferably in the range 0.0005 to 0.005.
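Rule (2) is a one-line update in code. A minimal sketch, with a = 0.001 as suggested in the text and the function name assumed:

```python
def update_normal_model(norm_hist, hist, a=0.001):
    """Step S8: exponentially weighted moving average of rule (2),
    NormHist_t = (1 - a) * NormHist_(t-1) + a * Hist_t.  A small a
    makes the normal model adapt slowly, so brief 'normal motion'
    barely disturbs it."""
    return (1.0 - a) * norm_hist + a * hist
```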
The method of normal model learning may alternatively include Neural Networks, Support Vector Machines, and clustering.
The processing provided by step S9 may be as will now be described with reference to Figure 4.
In step S1, the value of each bin in the grey-scale intensity histogram is stored for each pixel block. The result is that each pixel block is represented by plural values, one for each intensity bin. This is represented graphically by the grey-scale histograms of Figures 3a-3d.
In step S2, the values for a given intensity bin and pixel block are then summed for all images captured in a rolling window. The window may have a width of 30 seconds, for instance. The average value is then calculated in step S3 by dividing the values by the number of images in the window. This is performed for each intensity bin, to provide an average histogram for a pixel block.
In step S4, a measure of the difference is calculated between this average value for each intensity bin in each pixel block and the corresponding value for an intensity bin in a pixel block of the normal model at the current time. The measure of the difference may be a simple numerical difference, calculated by subtraction. A measure of the difference for a pixel block is obtained by summing the differences for all the intensity bins in that pixel block. This difference may be termed histogram distance. An alternative, and advantageous, method for calculating histogram distance is the Chi-square technique. Other suitable techniques will be known to the skilled person. In step S5, it is determined whether the change represents the presence of an abnormal condition. A significant change in a pixel block is determined if the calculated difference between the block and the normal exceeds a threshold. An abnormal condition is determined if a predetermined proportion (e.g. 60%) or more of the pixel blocks show overall change (positive or negative) above the threshold.
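The steps of Figure 4 might be sketched as follows. The Chi-square distance and the 60% proportion echo the text; the function names and the numeric per-block distance threshold are assumptions.

```python
import numpy as np

def chi_square_distance(h1, h2, eps=1e-9):
    """One common form of the Chi-square histogram distance (step S4);
    a simple bin-wise subtraction could be used instead."""
    h1 = h1.astype(float)
    h2 = h2.astype(float)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def abnormal_condition(window_hists, normal_model,
                       block_threshold=5.0, proportion=0.6):
    """Steps S1-S5 of Figure 4.  window_hists is a list (one entry per
    image in the rolling window) of dicts mapping block index ->
    histogram; normal_model maps block index -> normal histogram.
    block_threshold is an assumed value; 0.6 echoes the 60% example."""
    n_images = len(window_hists)
    changed = 0
    for block, norm_hist in normal_model.items():
        # Steps S2-S3: average this block's histogram over the window.
        avg_hist = sum(frame[block] for frame in window_hists) / n_images
        # Steps S4-S5, first threshold: significant change in this block?
        if chi_square_distance(avg_hist, norm_hist) > block_threshold:
            changed += 1
    # Step S5, second threshold: enough blocks deviating from normal?
    return changed / len(normal_model) >= proportion
```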
Since the normal model evolves relatively slowly, the effect of any motion on the normal model is low. Any 'normal motion', whereby an object only remains in a particular pixel for a short period of time, e.g. a bird flying closely in front of the camera, does not contribute significantly to the integrated value per intensity bin per pixel block.
Also, the use of thresholds in determining whether a pixel block has changed significantly from the normal model and in determining whether a deviation from normal has occurred for a significant proportion of pixel blocks means that 'normal motion' does not cause a (false) detection of an abnormal condition.
Alternatively, other statistical means of determining deviation from the normal model may be used, for example Latent Dirichlet Allocation and clustering.
The above-described method initializes the normal model at step S3 of Figure 2. This method thus provides the best performance when conditions are normal when this step is performed. If, however, the conditions at this time are not normal, the normal model will be learned correctly over time, in particular by repeated execution of step S8 over a period of time.
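Pulling the sketches above together, the flow of Figure 2 could look roughly like this. It is a hypothetical sketch, not the patent's implementation: camera, lamp and alarm are stand-in objects, and for simplicity the step-S8 update is placed after the step-S9 check, which realizes the rule that step S10 inhibits the update.

```python
import time
from collections import deque

CAPTURE_INTERVAL_S = 0.2                      # 200 ms, 5 images per second
WINDOW_IMAGES = int(30 / CAPTURE_INTERVAL_S)  # 30 s rolling window (assumed)

def run(camera, lamp, alarm):
    """Hypothetical main loop mirroring steps S1-S10 of Figure 2."""
    image = camera.capture()                          # step S1
    prev_features = block_histograms(image)           # step S2
    normal_model = dict(prev_features)                # step S3
    window = deque(maxlen=WINDOW_IMAGES)              # history for step S9

    while is_night_time(image):                       # operate in darkness
        time.sleep(CAPTURE_INTERVAL_S)
        image = camera.capture()                      # step S4
        features = block_histograms(image)            # step S5
        if motion_detected(prev_features, features):  # step S6
            lamp.switch_on()                          # step S7
        window.append(features)
        if abnormal_condition(list(window), normal_model):  # step S9
            alarm.trigger()                           # step S10
        else:
            for block, hist in features.items():      # step S8
                normal_model[block] = update_normal_model(
                    normal_model[block], hist)
        prev_features = features
```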
Effects of the above-described embodiments will now be discussed. In the prior art, a streetlamp module includes a camera sensor used to detect the presence of a person close to the luminaire and turn on the lamp, which is normally dimmed. A detection and tracking algorithm is configured to function under specific low-light conditions typical of the installation. If the sensing conditions deviate significantly from the normal conditions, the detection algorithm will provide unreliable output and consequently the lighting system will exhibit unpredictable, dangerous and certainly undesirable behavior. For example, if there is a thick fog, nothing can be detected below the fog curtain, and the light would most likely be off all the time. Vice versa, in case of snow the whole environment would be much brighter and most likely even small, non-relevant differences in the images would be detected, causing the lamp to continuously switch on and off. While specific camera-based fog or snow detectors can be designed, deviations from normal conditions are numerous and unpredictable, compromising operation of the system.

The embodiments described above provide a lighting system with a sensor and processing unit which is capable of learning its normal operating conditions and detecting deviations from the normal model to trigger a fallback operating mode or an alarm signal. Moreover, this is achieved without requiring the system to be programmed to deal with specific abnormal conditions. This provides overall improved operation in the sense that abnormal conditions are more reliably detected, preventing incorrect operation of the streetlamp system in numerous, diverse situations.
In some embodiments, abnormal conditions are classified by the system 100 into different broad categories depending on the severity and type of deviation from the normal model. In these embodiments, different fallback modes and signals are triggered by the system 100, depending on the detected condition.
In some embodiments the streetlamp system 100 is connected to a network and control centre 111 to which other streetlamp systems are also connected. In these embodiments, distributed network information can be used to make more accurate and faster decisions. For example, if several streetlamp systems 100 seem to detect a heavy-fog abnormal condition, it is likely that there actually is fog, and the sensing module 112 can then trigger an abnormal operating condition detection without waiting the usual time (30 seconds in the above).
Once a particular abnormal condition is detected, recording of video can be triggered by the system 100, to allow inspection at a later stage. In the case of a networked system, the recorded video could be sent to the control station 111 so as to inform relevant personnel or to allow manual validation of the abnormal condition.
Although in the above embodiments grey-scale histograms are used, alternatives are envisaged. For instance, in other embodiments histograms of oriented edges are used in place of the grey-level histograms described above. In some other embodiments, wavelet histograms are used in place of the grey-level histograms described above. In still further embodiments, histograms of Local Binary Patterns are used in place of the grey-level histograms described above.
Although in the above embodiments a camera 101 is used to monitor the scene, it will be appreciated that other sensors that provide spatial information may instead be used. For instance, the camera 101 may be replaced by a spatial ultrasound sensor arrangement, a time-of-flight camera, RADAR, or a thermopile array. In these embodiments, data from each of plural spatially distinct elements is processed as detailed above.

In some embodiments, the blocks of pixels allocated by feature extraction in step S2 may have special rules associated with them. For example, a scene may be divided into areas such as 'pavement', 'road', 'bus stop', and 'building' based on the amount of motion detected over a set period of time. To reduce the processing burden, the motion detection algorithm does not process pixel blocks associated with the building, as no relevant motion will ever be detected there. Also, the abnormal condition detection algorithm is configured not to consider pixel blocks associated with the bus stop in detecting an abnormal condition, for the reason that buses may stop there for relatively long periods and this is not indicative of an abnormal condition.
In some embodiments, pixel blocks may overlap. Alternatively there could be voids between the areas of sensitivity of adjacent pixel blocks.
The fallback mode described in step S10 may take several forms. In the embodiment of the streetlamp system, the fallback mode may be keeping the lamp on continuously. The alarm triggered may be an audible alarm which is part of the system itself, so that the public are aware of a dangerous condition, or the alarm may be an audio or visual alarm in a central control centre which makes only an operator aware of an abnormal condition.
Although in the above description the normal model is updated every time a new image is captured, it will be appreciated that the normal model may instead be updated less frequently.
In other embodiments, the camera 101 is replaced by a non-spatially aware sensor (not shown). Examples of suitable sensors are passive microphones and passive infrared (PIR) sensors. In these embodiments, the method of operation closely follows that described above for sensors with spatial resolution, with the following variations.
In embodiments wherein the camera 101 is replaced by a passive microphone, step S1 entails capturing a first acoustic profile of the scene to determine the background noise level. Step S2 involves extracting representative signal features from the profile. In one embodiment, these features may be signal amplitude ("loudness"). Alternatively, they may be signal variance and dynamic range, signal pitch period and bandwidth, or Mel-frequency cepstral coefficients. Histograms of the extracted features are concatenated or combined (e.g. by weighted averaging), and the resulting histograms are used as feature vectors to represent the scene content. The feature vectors are stored in memory 104.
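As a sketch only, the following shows how such an acoustic feature vector might be assembled, assuming numpy and arbitrary frame and bin sizes (both assumptions):

```python
import numpy as np

def acoustic_feature_vector(samples: np.ndarray, frame: int = 1024,
                            bins: int = 16) -> np.ndarray:
    """Split an acoustic profile into frames, compute per-frame amplitude,
    variance and dynamic range, and concatenate normalized histograms of
    each, mirroring the combined-histogram feature vector described above."""
    n = len(samples) // frame
    frames = samples[:n * frame].reshape(n, frame)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))    # loudness per frame
    var = np.var(frames, axis=1)                   # variance per frame
    dyn = frames.max(axis=1) - frames.min(axis=1)  # dynamic range per frame
    parts = []
    for feat in (rms, var, dyn):
        h, _ = np.histogram(feat, bins=bins)
        parts.append(h / max(h.sum(), 1))          # normalize each histogram
    return np.concatenate(parts)
```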
At step S3, a normal model is initialized. This may involve processing the first profile using an algorithm, as is explained in more detail below. Alternatively, step S3 may involve reading a normal model stored in the memory 104, for instance from a factory setting or from previous operation of the streetlamp system 100. The normal model is initialized for the scene as a whole.
At step S4, the microphone 101 is controlled to capture a second acoustic profile, and its required features are extracted in step S5, as described for S2. At step S6, this second profile is used to determine whether motion is present in the scene. Step S6 involves comparing the feature vector of the profile captured in step S4 with the features of the immediately preceding profile. In the first execution of step S6, the immediately preceding profile is the profile captured at step S1. In subsequent executions of step S6, the immediately preceding profile is the profile that was captured on the preceding execution of step S4. Motion may be determined if the feature vectors change by some threshold level, e.g. if a detected sound source increases or decreases in amplitude. Alternatively, a particular redshift or blueshift in the source frequency may be detected as representing motion. If motion is detected in step S6, the method proceeds to step S7.
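A sketch of the step S6 comparison, assuming an L1 distance and an arbitrary threshold value (both assumptions; the embodiments do not prescribe a particular metric):

```python
import numpy as np

def motion_detected(current: np.ndarray, previous: np.ndarray,
                    threshold: float = 0.2) -> bool:
    """Flag motion when the feature vector changes by more than a threshold,
    e.g. a sound source growing or fading in amplitude."""
    return float(np.abs(current - previous).sum()) > threshold
```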
At step S7, the device 102 is configured to act upon the motion detection. For example, step S7 may involve activating the lamp 109, if it is not already activated; if it is already activated, it may involve maintaining the lamp 109 activated. Step S7 may also involve storing a record of the time at which the lamp 109 was activated. If the activation of the lamp 109 is timer-based, in that the lamp remains activated for a set time period after motion is last detected, step S7 may involve resetting the timer. After step S7, the method proceeds to step S8.
If motion is not detected at step S6, or following step S7, the normal model is updated at step S8 and the updated normal model is stored in the memory 104.
At step S9, abnormal condition detection is performed to determine whether an abnormal condition is present. If no abnormal condition is detected at step S9, the method returns to step S4. If an abnormal condition is detected at step S9, an alarm or fallback mode is entered at step S10. Step S10 may involve triggering an alarm. The alarm may be generated locally at the streetlamp system 100 or may be communicated to the control centre 111. Step S10 may alternatively or additionally involve triggering fallback actions; a fallback action may be illumination of the lamp 109. Step S10 also inhibits the updating of the normal model in step S8, and may involve undoing any updates to the normal model made in a period of time prior to step S10 being performed. After step S10, the method returns to step S4. Step S4 is performed periodically; the interval between successive performances of step S4 affects a number of factors, including the amount of processing required and the ability of the system to detect motion. In these embodiments, step S4 is performed at 200 ms intervals, i.e. 5 times per second, in order to generate N samples, although alternative intervals are also envisaged.
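For orientation, a sketch of the S4-S10 cycle as a periodic loop; all callables are placeholders for the modules discussed above, and the normal-model update is simply skipped when an abnormal condition is detected, reflecting the inhibition of step S8:

```python
import time

def monitoring_loop(capture, extract, detect_motion, act_on_motion,
                    update_normal, detect_abnormal, enter_fallback,
                    interval_s: float = 0.2):
    """Periodic monitoring cycle: capture (S4), extract (S5), motion test
    (S6/S7), normal-model update (S8) and abnormal-condition test (S9/S10),
    repeated at 200 ms intervals as in the embodiments above."""
    previous = extract(capture())        # S1/S2: first profile
    normal = previous                    # S3: initialize the normal model
    while True:
        current = extract(capture())     # S4/S5
        if detect_motion(current, previous):
            act_on_motion()              # S7: e.g. activate the lamp
        if detect_abnormal(current, normal):
            enter_fallback()             # S10: alarm and/or fallback mode
        else:
            normal = update_normal(normal, current)  # S8
        previous = current
        time.sleep(interval_s)           # step S4 repeats every 200 ms
```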
The initialization of the normal model at step S3 and the updating of the normal model at step S8 will now be described for this non-spatially-aware system.
The normal model is initialized with the feature vector computed for the first captured N samples. The learning is then achieved by updating the normal model online, e.g. every time a new sample set is captured, using the new feature vector computed for the most recent profile. This update step can be done in several ways. Different embodiments use clustering, neural networks, Support Vector Machines and other suitable update techniques.
Learning is achieved in this particular example by updating the feature vector of the scene normal model using an exponentially weighted averaging rule. Assume that we are at time t; the scene normal model feature vector, NormFeatureVect_t, is updated using the feature vector computed for the current set of N samples, FeatureVect_t, according to the rule:

NormFeatureVect_t = (1 − α) · NormFeatureVect_{t−1} + α · FeatureVect_t    (3)

where α is a constant.
The value of α determines how quickly the normal model adapts to new observations: if α = 0 there is no learning, and if α = 1 the normal model equals the most recent observation. Values in between are typically used. In these embodiments, the value of α is small, so the normal model adapts slowly; a slowly-learning α (e.g. 0.001) is recommended for this application. The value of α may for instance be 0.001, and may take a value in the range 0.0001 to 0.01, more preferably in the range 0.0005 to 0.005.
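Rule (3) reduces to a one-line update; a sketch assuming numpy feature vectors:

```python
import numpy as np

def update_normal(norm_fv: np.ndarray, fv: np.ndarray,
                  alpha: float = 0.001) -> np.ndarray:
    """Exponentially weighted averaging rule (3): with a small alpha
    (e.g. 0.001) the normal model adapts slowly to new observations."""
    return (1.0 - alpha) * norm_fv + alpha * fv
```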
The processing performed at step S9 will now be described.
The abnormal condition detection module continuously compares the feature vector computed for the current dataset of N samples with the normal model feature vector. In this example, the Chi-square distance between feature vectors is used as the histogram distance. The module thereby estimates how different the current and normal feature vectors are. If the difference from the normal model is significantly large (e.g. larger than a predefined or learned threshold) for a long time (e.g. 30 seconds), the system 100 determines that an abnormal condition is present.
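A sketch of this persistence test, using the 30-second duration and 200 ms sampling interval from the text; the Chi-square formula is standard, while the threshold value is an assumption:

```python
import numpy as np

def chi_square_distance(h1: np.ndarray, h2: np.ndarray,
                        eps: float = 1e-10) -> float:
    """Chi-square distance between two histogram feature vectors."""
    return 0.5 * float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

class AbnormalConditionDetector:
    """Flags an abnormal condition once the distance from the normal model
    has stayed above a threshold for a sustained period (e.g. 30 s)."""

    def __init__(self, threshold: float = 0.25, required_s: float = 30.0,
                 interval_s: float = 0.2):
        self.threshold = threshold
        self.required_ticks = int(required_s / interval_s)  # 150 samples
        self.ticks = 0

    def step(self, current: np.ndarray, normal: np.ndarray) -> bool:
        if chi_square_distance(current, normal) > self.threshold:
            self.ticks += 1              # deviation persists
        else:
            self.ticks = 0               # reset on return to normal
        return self.ticks >= self.required_ticks
```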
Since the normal model evolves relatively slowly, the effect of any motion on the normal model is low. Any 'normal motion', whereby an object produces sound for only a short period of time, e.g. a bird flying close to the microphone, does not contribute significantly to the integrated value per histogram bin.
Also, the use of thresholds in determining whether a detected scene has changed significantly from the normal model and in determining whether a deviation from normal has occurred means that 'normal motion' does not cause a (false) detection of an abnormal condition.
Alternatively, other statistical means of determining deviation from the normal model may be used, for example Latent Dirichlet Allocation and clustering.
The above-described method initializes the normal model at step S3 of Figure 2, and thus provides the best performance when conditions are normal at the time this step is performed. If, however, conditions at this time are not normal, the normal model will nevertheless come to be learned correctly over time, in particular through repeated execution of step S8.
Effects of the above-described embodiments will now be discussed. In the prior art, it is envisaged that a streetlamp module could be provided with a microphone sensor and use it to detect the presence of a person close to the luminaire and turn on the lamp. A detection algorithm is configured to function under the specific background noise conditions typical of the installation. If the sensing conditions deviate significantly from the normal conditions, the detection algorithm might provide unreliable output, and the lighting system could consequently exhibit undesirable behavior. This might occur, for example, if a car alarm is activated in proximity to the streetlamp. Using the above-described system, however, the streetlamp module would identify the presence of the car alarm as an abnormal operating condition and then enter a fallback or alarm mode. Other abnormal conditions can also be detected by the system, triggering entry into the fallback or alarm mode.
In embodiments using a PIR sensor in place of the camera 101, operation is similar to that described above in relation to a microphone sensor. The RF transceiver 110 is one example of a communication module, and may be replaced with an optical or microwave communications module, or a wired connection to a network, for instance.
It will be appreciated that the term "comprising" does not exclude other elements or steps and that the indefinite article "a" or "an" does not exclude a plurality. A single processor may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to an advantage. Any reference signs in the claims should not be construed as limiting the scope of the claims.
Although claims have been formulated in this application to particular combinations of features, it should be understood that the scope of the disclosure of the present invention also includes any novel features or any novel combinations of features disclosed herein either explicitly or implicitly or any generalization thereof, whether or not it relates to the same invention as presently claimed in any claim and whether or not it mitigates any or all of the same technical problems as does the present invention. The applicants hereby give notice that new claims may be formulated to such features and/or combinations of features during the prosecution of the present application or of any further application derived therefrom.
Other modifications and variations falling within the scope of the claims hereinafter will be evident to those skilled in the art.
CLAIMS:
1. Apparatus for controlling a lighting system, the apparatus comprising:
a processor for processing outputs of one or more sensors to monitor a scene observed by the sensors;
a first trigger responsive to detecting motion in the scene by triggering a first action comprising switching on a lamp;
an abnormal condition detector arranged to:
learn a normal scene; and
determine that an abnormal condition exists when a significant change in the normal scene is detected; and
a second trigger responsive to detecting an abnormal condition in the scene by triggering a second action.
2. Apparatus as claimed in claim 1, wherein the outputs of the one or more sensors are spatially distinct and wherein the abnormal condition detector is arranged to determine a significant change in the normal scene by determining a number of elements of outputs of the one or more sensors that deviate from the normal by a threshold amount.
3. Apparatus as claimed in claim 2, wherein the abnormal condition detector is arranged to determine a significant change in the normal scene by comparing the number of elements of outputs of the one or more sensors that deviate from the normal by a threshold amount to a second threshold.
4. Apparatus as claimed in any preceding claim, wherein the abnormal condition detector is configured to detect change by monitoring changes in sensor output over a first period of time, and to detect an abnormal condition by monitoring changes in sensor output over a second period of time, wherein the second period of time is longer than the first period of time.
5. Apparatus as claimed in any preceding claim, wherein the processor is arranged to process outputs of the one or more sensors to monitor the scene by monitoring plural elements of outputs of the one or more sensors, each element comprising two or more adjacent pixels of the one or more sensors.
6. Apparatus as claimed in any preceding claim, wherein the sensor is an optical camera.
7. Apparatus as claimed in any preceding claim, wherein the abnormal condition detector is arranged to learn a normal scene by generating a histogram for each element, each histogram comprising an occurrence value for each of plural bins of the histogram.
8. Apparatus as claimed in claim 7, wherein different ones of the bins of the histogram relate to different intensities or ranges of intensity.
9. Apparatus as claimed in claim 7 or claim 8, wherein the abnormal condition detector is arranged to learn a normal scene by calculating a weighted moving average of the histograms.
10. Apparatus as claimed in any of claims 7 to 9, wherein the abnormal condition detector is arranged to determine whether each element has deviated significantly from the normal by calculating a Chi-square histogram difference.
11. A method for controlling a lighting system, the method comprising:
processing outputs of one or more sensors to monitor a scene observed by the sensors;
in response to detecting motion in the scene, triggering a first action comprising switching on a lamp; and
in response to detecting an abnormal condition in the scene, triggering a second action, wherein detecting an abnormal condition comprises:
learning a normal scene; and
determining that an abnormal condition exists when a significant change in the normal scene is detected.
12. A method as claimed in claim 11, wherein the outputs of the one or more sensors are spatially distinct and wherein determining a significant change in the normal scene comprises determining a number of elements of the outputs of the one or more sensors that deviate from the normal by a threshold amount.
13. A method as claimed in claim 11 or claim 12, wherein detecting change comprises monitoring changes in sensor output over a first period of time, and wherein detecting an abnormal condition comprises monitoring changes in sensor output over a second period of time, wherein the second period of time is longer than the first period of time.
14. A computer program comprising machine-readable instructions that, when executed by computing apparatus, control it to perform the method of claim 12 or claim 13.
15. Apparatus comprising one or more processors, one or more memories and computer code stored in the one or more memories, the computer code being configured to control the one or more processors to perform a method of controlling a lighting system comprising:
processing outputs of one or more sensors to monitor a scene observed by the sensors;
in response to detecting motion in the scene, triggering a first action comprising switching on a lamp; and
in response to detecting an abnormal condition in the scene, triggering a second action, wherein detecting an abnormal condition comprises:
learning a normal scene; and
determining that an abnormal condition exists when a significant change in the normal scene is detected.
PCT/IB2012/057420 2011-12-22 2012-12-18 Monitoring a scene WO2013093771A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201161578950P 2011-12-22 2011-12-22
US61/578,950 2011-12-22

Publications (1)

Publication Number Publication Date
WO2013093771A1 true WO2013093771A1 (en) 2013-06-27

Family

ID=47666432

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2012/057420 WO2013093771A1 (en) 2011-12-22 2012-12-18 Monitoring a scene

Country Status (1)

Country Link
WO (1) WO2013093771A1 (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7110454B1 (en) * 1999-12-21 2006-09-19 Siemens Corporate Research, Inc. Integrated method for scene change detection
US20100026734A1 (en) * 2006-12-20 2010-02-04 Koninklijke Philips Electronics N.V. System, method and computer-readable medium for displaying light radiation
US20080265799A1 (en) 2007-04-20 2008-10-30 Sibert W Olin Illumination control network
US20110273114A1 (en) * 2007-05-22 2011-11-10 Koninklijke Philips Electronics N.V. Remote lighting control
US20100259197A1 (en) * 2007-11-06 2010-10-14 Koninklijke Philips Electronics N.V. Light control system and method for automatically rendering a lighting scene
US20110251725A1 (en) * 2010-04-08 2011-10-13 Mark Kit Jiun Chan Utility control system

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3064042A1 (en) * 2013-10-29 2016-09-07 CP Electronics Limited Apparatus for controlling an electrical load
US10531539B2 (en) 2016-03-02 2020-01-07 Signify Holding B.V. Method for characterizing illumination of a target surface
EP3223239A3 (en) * 2016-03-24 2017-11-29 Imagination Technologies Limited Learned feature motion detection
GB2549074A (en) * 2016-03-24 2017-10-11 Imagination Tech Ltd Learned feature motion detection
GB2549074B (en) * 2016-03-24 2019-07-17 Imagination Tech Ltd Learned feature motion detection
US10395102B2 (en) 2016-03-24 2019-08-27 Imagination Technologies Limited Learned feature motion detection
US11068703B2 (en) 2016-03-24 2021-07-20 Imagination Technologies Limited Learned feature motion detection
US11676288B2 (en) 2016-03-24 2023-06-13 Imagination Technologies Limited Learned feature motion detection
US11064591B2 (en) 2016-09-22 2021-07-13 Signify Holding B.V. Flooding localization and signalling via intelligent lighting
CN107067595A (en) * 2017-04-28 2017-08-18 南京国电南思科技发展股份有限公司 State identification method, device and the electronic equipment of a kind of indicator lamp
CN107067595B (en) * 2017-04-28 2020-05-05 南京国电南思科技发展股份有限公司 State identification method and device of indicator light and electronic equipment
US11749100B2 (en) 2019-05-30 2023-09-05 Signify Holding B.V. System and methods to provide emergency support using lighting infrastructure
CN117177418A (en) * 2023-10-31 2023-12-05 宝邑(深圳)照明科技有限公司 Method, device, equipment and storage medium for controlling intelligent indoor illumination of building

Similar Documents

Publication Publication Date Title
WO2013093771A1 (en) Monitoring a scene
US9367925B2 (en) Image detection and processing for building control
US10187574B1 (en) Power-saving battery-operated camera
EP2461300B1 (en) Smoke detecting apparatus
US8786198B2 (en) System and methods for automatically configuring of lighting parameters
KR101835552B1 (en) Control center system of working environment for smart factory
KR102281918B1 (en) Smart lamp system based multi sensor object sensing
KR20180103596A (en) System for intelligent safety lighting
JP2011123742A (en) Intruding object detector
KR101454644B1 (en) Loitering Detection Using a Pedestrian Tracker
US9007459B2 (en) Method to monitor an area
US9443150B2 (en) Device and method for detecting objects from a video signal
CN109844825B (en) Presence detection system and method
JP7125843B2 (en) Fault detection system
US10477647B2 (en) Adaptive visual intelligence outdoor motion/occupancy and luminance detection system
CN113076791A (en) Proximity object detection for surveillance cameras
KR101581162B1 (en) Automatic detection method, apparatus and system of flame, smoke and object movement based on real time images
US11644191B2 (en) NIR motion detection system and method
KR101826715B1 (en) System and method for detecting vehicle invasion using room camera
KR20170108564A (en) System and method for detecting vehicle invasion using image
CN109074714B (en) Detection apparatus, method and storage medium for detecting event
JP4925942B2 (en) Image sensor
JP6155106B2 (en) Image sensor
JP7328778B2 (en) Image processing device and image processing program
JP2021034763A (en) Image processing device and image processing program

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 12821313

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12821313

Country of ref document: EP

Kind code of ref document: A1