WO2013072807A1 - Method and apparatus for registering a scene - Google Patents

Method and apparatus for registering a scene

Info

Publication number
WO2013072807A1
Authority
WO
WIPO (PCT)
Prior art keywords
images
image
series
pixels
motion
Prior art date
Application number
PCT/IB2012/056188
Other languages
French (fr)
Inventor
Tommaso Gritti
Adrienne Heinrich
Gerard De Haan
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V. filed Critical Koninklijke Philips Electronics N.V.
Publication of WO2013072807A1 publication Critical patent/WO2013072807A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/38 Registration of image sequences
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/743 Bracketing, i.e. taking a series of images with varying exposure conditions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30232 Surveillance
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602 Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G08B13/1961 Movement detection not involving frame subtraction, e.g. motion detection on the basis of luminance changes in the image

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

A method and apparatus for registering a scene are disclosed. The apparatus comprises an image registration means arranged to register a first and second image series, wherein each series comprises a plurality of images, each of the plurality of images having an exposure time, and wherein at least one of the plurality of images has an exposure time which is different from the exposure times of the remaining images in the series; and a processing means configured to define a first group of pixels in an image in the first image series, define a second group of pixels corresponding spatially to the first group of pixels in an image in the second image series, perform an analysis of the first and second groups of pixels, and, based on the analysis, determine whether motion has occurred within the scene.

Description

Method and apparatus for registering a scene
FIELD OF THE INVENTION
The present invention relates to a method and apparatus for registering a scene.
BACKGROUND OF THE INVENTION
In commercial and residential premises, motion sensor devices are used to activate lighting systems, removing the need for a person to switch on lights manually when he enters a room or in security systems, for example to trigger a burglar alarm when an intruder enters an area under surveillance.
Digital video cameras can capture images over a wide area and perform an extensive set of algorithms so a large variety of information may be measured. Frequent software updates also lead to ever improving functionality. Since the technology surrounding digital video cameras has become more widespread and advanced, their costs have fallen. Consequently, digital video cameras have become more attractive for use in motion sensor devices.
A significant obstacle to the use of digital video cameras in motion sensor devices is the difficulty in obtaining images of scenes with a high dynamic range, the dynamic range being the range of luminances in the scene which can be captured in a single frame resulting in values within an analog-to-digital converter range. In particular, developing algorithms that process an image signal in such a way that different phenomena can be distinguished has proved difficult.
For example, the change in pixel values obtained by the motion sensor device when a shadow moves across the scene, or any change in the lighting environment, can be confused with the change in pixel values obtained when motion occurs. The result is that the motion sensor device may indicate that motion has been detected in the scene under surveillance when, in fact, the lighting environment of the scene has changed and no motion has occurred therein.
Various attempts have been made to overcome this obstacle to robust motion detection. In one approach, the environment of the scene under surveillance is strictly controlled. This approach involves controlling the lighting and orientation of objects being monitored as well as the distance between the objects and the motion sensor device. This typically applies to industrial vision systems. In scenarios in which full control over object position and size with respect to the camera is not attractive, such as games consoles, alternative approaches based on active illumination are known.
In games consoles which rely on the participants' movements to control a game's characters, objects to be monitored are illuminated by an array of infrared lasers and images of the objects are captured by an infrared-sensitive camera. The games console can process the captured image and, using its knowledge of the pattern created by the array of lasers, estimate the depth of each point. While this approach may be appropriate for systems where the cost of materials is not the overriding factor, the technology becomes prohibitively expensive for most lighting and security systems.
Lighting and security systems which utilize digital video cameras for motion detection require less expensive hardware to be utilized in order to be commercially viable. Furthermore, removing the need for external lighting to achieve a stable lighting environment is economically and environmentally beneficial. Improving the dynamic range of the digital camera is therefore a priority.
Digital cameras have a sensor which comprises an array of red, green and blue (RGB) pixels, each of which captures light during a predetermined exposure time (i.e. the time during which the pixel is exposed). The amount of light captured depends on the exposure time and the size of the aperture letting light through. For a particular combination of exposure time and aperture size the sensor has a dynamic range of luminances which can be recorded and displayed. However, for each combination of exposure time and aperture size some pixels within the array will likely have a luminance value outside the dynamic range due to under- or over-exposure. Such pixels are referred to as "clipped pixels" in the art. Clipped pixels contribute less of the information relied upon by motion sensors than pixels whose luminance values lie within the dynamic range; the number of clipped pixels should therefore be minimized by increasing the dynamic range of the motion sensor device.
Increases in the dynamic range of motion sensor devices have been achieved using various techniques such as dynamic filtering, where the exposure times of individual pixels are changed depending on the amount of light they receive. Additionally, offsetting pixel sample and digitization times, adopting a logarithmic response sensor and combining linear and logarithmic responses have been used to improve the dynamic range obtained. However, such approaches are very expensive since they require specialized hardware, thus rendering them unsuitable for motion sensor devices sold in large volumes.
A motion sensor device capable of capturing images with a high dynamic range using commercially affordable hardware is therefore desirable.
SUMMARY OF THE INVENTION
According to the invention there is provided an apparatus for registering a scene, the apparatus comprising an image registration means arranged to register a first and second image series, wherein each series comprises a plurality of images, each of the plurality of images having an exposure time, and wherein at least one of the plurality of images has an exposure time which is different from the exposure times of the remaining images in the series; and a processing means configured to define a first group of pixels in an image in the first image series, define a second group of pixels corresponding spatially to the first group of pixels in an image in the second image series; perform an analysis of the first and second group of pixels; and based on the analysis, determine whether motion has occurred within the scene.
The processor may be further configured to store lower and upper luminance threshold values, determine whether a luminance value associated with the group of pixels is less than the lower luminance threshold value or greater than the upper luminance threshold value, and if so discard the group of pixels from the analysis.
The apparatus may further comprise means for storing motion history; and means for adjusting the exposure times of the plurality of images and/or the number of images within the plurality of images based on the motion history.
The apparatus may further comprise means for storing a predetermined frequency threshold and a motion frequency of the scene, means for determining whether the motion frequency of the scene is lower than the predetermined frequency threshold, and if so, decreasing the number of images in each series.
The analysis may comprise extracting texture information from images in the sequence of image series.
The processing means may be configured to output a determination that motion has occurred within the scene to a device.
The device may be a lighting controller.
The image registration means may be an image sensor.
According to the invention there is also provided a method for registering a scene, the method comprising registering a first and second image series, wherein each series comprises a plurality of images, each of the plurality of images having an exposure time, and wherein at least one of the plurality of images has an exposure time which is different from the exposure times of the remaining images in the series; defining a first group of pixels in an image in the first image series, defining a second group of pixels corresponding spatially to the first group of pixels in an image in the second image series; performing an analysis of the first and second group of pixels; and determining, based on the analysis, whether motion has occurred within the scene.
The method may further comprise storing lower and upper luminance threshold values, determining whether a luminance value associated with the group of pixels is less than the lower luminance threshold value or greater than the upper luminance threshold value, and if so discarding the group of pixels from the analysis.
The method may also comprise storing motion history; and adjusting the exposure times of the plurality of images and/or the number of images within the plurality of images based on the motion history.
The method may further comprise storing a predetermined frequency threshold and a motion frequency of the scene, determining whether the motion frequency of the scene is lower than the predetermined frequency threshold, and if so, decreasing the number of images in each series.
Performing an analysis may comprise extracting texture information from images in the sequence of image series.
The method may further comprise outputting a determination that motion has occurred within the scene to a device.
The invention may also provide computer program instructions that, when executed on a computer, perform the method.
BRIEF DESCRIPTION OF THE DRAWINGS
So that the invention may be understood, an embodiment thereof will be described with reference to the accompanying drawings, in which:
Figure 1 is a view of an embodiment of the invention in use;
Figure 2 is a schematic view of a motion sensor device in accordance with the invention;
Figure 3 is a first diagrammatic view of a sensor;
Figure 4 is a second diagrammatic view of a sensor;
Figure 5 is a histogram in accordance with the invention; and
Figure 6 is a schematic view of an application of the invention.
DETAILED DESCRIPTION
Figure 1 shows a room 1 in which movement is monitored. The room 1 contains a motion sensor device 2 which is activated when motion is detected. For example, a person 3 may walk into the room 1, thus triggering the motion sensor 2. The motion sensor device 2 may be in communication with a system of luminaires such that, when motion is detected, the luminaires are activated to provide lighting in the room 1. Alternatively, the motion sensor device 2 may be connected to a security system such as a burglar alarm or telephone line to a police station.
Examples of rooms which may be monitored by a system according to the present invention are offices, factory warehouses, private dwellings, commercial/retail premises and so forth.
Figure 2 is a schematic view of the motion sensor device 2. The motion sensor device 2 comprises an image sensor 4, a processor 5, a memory 6 and a power source 7. The image sensor 4 may be a Complementary Metal Oxide Semiconductor (CMOS) camera with an integrated analogue-to-digital converter. CMOS cameras are advantageous since they tend to consume less power. An advantage of the present invention is that only a minimal level of modification of known image capture hardware is required. As such, a detailed description of such known hardware is omitted, since it will be common knowledge to the person skilled in the art.
Images are captured by the image sensor 4 by exposing an array of pixels to light for a predetermined exposure time. The exposure time can be varied to best suit the environment in which images are to be captured. For example, in bright environments it is desirable to use a short exposure time so that the captured image is not saturated with light. In darker environments it is desirable to use a longer exposure time to ensure a sufficient amount of light is captured.
To capture images with a wide dynamic range, prior systems have attempted to control the environment; in other words, the amount of light present in scenes is controlled using external lighting so that an appropriate exposure may be set for the images to be captured.
In contrast, the processor 5 controls the image sensor 4 to capture a sequence of image series n, n+1, .... Each image series n comprises several images E1-E4, each image within the series being taken with a different exposure time. Figure 3 shows two of the image series n and n+1 captured by the motion sensor device 2 which were obtained as changes in the lighting environment occurred.
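The capture loop implied by this arrangement can be pictured as follows. This is a minimal Python sketch, not the patent's implementation: the capture_frame(exposure_ms) read-out function and the example exposure times are assumptions, since the text specifies neither the sensor API nor the actual exposure values.

```python
import numpy as np

# Example exposure times for the four images E1-E4 of one series (illustrative values).
EXPOSURES_MS = [1, 4, 16, 64]

def capture_frame(exposure_ms):
    """Placeholder for the CMOS sensor read-out at the given exposure time.

    Replace with the actual camera/driver call; it should return a 2-D array
    of pixel luminances."""
    raise NotImplementedError

def capture_series(exposures_ms=EXPOSURES_MS):
    """Capture one image series n: one frame per exposure time."""
    return [np.asarray(capture_frame(t), dtype=np.float32) for t in exposures_ms]

def capture_sequence(num_series):
    """Capture a sequence of image series n, n+1, ..."""
    return [capture_series() for _ in range(num_series)]
```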
Each image E1-E4 is divided into groups of pixels, represented by boxes in the grids shown in Figure 3. As the exposure time varies, the luminance values vary. Pixel groups are recorded in the series of images n and n+1, and each pixel group has a luminance value.
Referring to the series of images n, luminance values, A-D, are shown for some of the pixel groups in the images E1-E4. For the convenience of the reader the luminance values of the remaining pixel groups are not shown.
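One way to obtain a single luminance value per pixel group, per exposure, is a simple block reduction over each image. The sketch below assumes 16x16 pixel groups and uses the mean luminance of each group; neither the group size nor the aggregation rule is fixed by the text.

```python
import numpy as np

def group_luminance(image, block=16):
    """Mean luminance of each pixel group (grid box) in one image.

    Returns a 2-D array of shape (rows_of_groups, cols_of_groups)."""
    h, w = image.shape
    h, w = h - h % block, w - w % block              # crop to a whole number of groups
    tiles = image[:h, :w].reshape(h // block, block, w // block, block)
    return tiles.mean(axis=(1, 3))

def series_luminance(series, block=16):
    """Per-group luminances for every image E1-E4 of one series.

    Result shape: (num_exposures, groups_y, groups_x)."""
    return np.stack([group_luminance(img, block) for img in series])
```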
After images E1-E4 of series n have been captured, the image sensor 4 captures image series n+1. As can be seen in series n+1, the pixel groups which gave luminance value B in image E3 and luminance values C and D in series n give the same luminance values in the same images of series n+1. In other words, these luminance values are captured in images of the same exposure time in both series n and n+1.
Luminance value A is recorded in image E2 of series n and in image E1 of series n+1. This indicates that a different exposure time is required to record the same luminance value.
The processor 5 analyses the changes in the luminance values of the pixel groups from one series of images to the next. The changes observed in luminance values A-D are incremental, with only luminance value A switching from image E2 to image E1 between series n and n+1. This strong correlation between image series n and n+1 shows that the image texture between the recorded series of images n and n+1 does not vary significantly.
Image texture that does not vary significantly is indicative of a change in the lighting environment, such as a shadow moving across the scene, rather than object movement.
Figure 4 shows two series of images m and m+1. Each series is made up of four images E1-E4. In the series m four pixel groups have been shown which yielded luminance values F, G, H and I.
In the image series m+1, the pixel groups which yielded luminance values F, H, I in the series m now yield luminance values X, Y and Z. The pixel group which yielded the luminance value G in image E2 of series m yields a luminance value G in image E3 of image series m+1.
The change in luminance value from F, H, I to X, Y, Z indicates that motion has occurred in the scene. This is because the object that was captured by the sensor and which led to the luminance values F, H, I in the image series m is no longer present when the image series m+1 is captured. An abrupt change in signal output is observed (from F, H, I to X, Y, Z) rather than the low variation in texture observed when only a change in the lighting environment is observed as described above with reference to Figure 3. This may be due to the object being occluded by some other object or due to the object being removed from the scene.
The presence of the luminance value G in image E3 of series m+1 shows a low variation in texture in this region of the scene of the type described above with reference to Figure 3 and indicates that this region of the scene has not been occluded. The change in the presence of signal G from image E2 to E3 between series m and m+1 is indicative of some sort of change in the lighting environment. This may be due to a shadow cast by the newly present object in the scene which led to the change from signals F, H, I to X, Y, Z.
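A possible per-group decision rule following the reasoning of Figures 3 and 4 is sketched below: a group whose luminance value reappears at the same or a neighbouring exposure in the next series is treated as a lighting change (low texture variation), while a group with no close match at any exposure is treated as motion. The matching tolerance and the permitted exposure shift are assumptions, not values given in the description.

```python
import numpy as np

MATCH_TOL = 0.05          # relative tolerance for "the same" luminance value (assumed)
MAX_EXPOSURE_SHIFT = 1    # a value may move by at most one exposure step (e.g. E2 -> E1)

def classify_group(lums_prev, lums_next):
    """Classify one pixel group given its luminance per exposure in two consecutive series."""
    for i, v in enumerate(lums_prev):
        j = int(np.argmin(np.abs(lums_next - v)))     # best-matching exposure in the next series
        close = abs(lums_next[j] - v) <= MATCH_TOL * max(abs(v), 1e-6)
        if close and abs(j - i) <= MAX_EXPOSURE_SHIFT:
            return "lighting_change"                  # same texture, possibly shifted one exposure
    return "motion"                                   # abrupt change at every exposure

def motion_map(series_prev, series_next):
    """Boolean map of groups classified as motion.

    Inputs have shape (num_exposures, groups_y, groups_x), e.g. as returned by series_luminance()."""
    _, gy, gx = series_prev.shape
    result = np.zeros((gy, gx), dtype=bool)
    for y in range(gy):
        for x in range(gx):
            result[y, x] = classify_group(series_prev[:, y, x],
                                          series_next[:, y, x]) == "motion"
    return result
```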
It is desirable that the pixel groups return luminance values within an acceptable range to avoid the presence of "clipped" pixel groups which do not contribute as much information as those within the acceptable range. Figure 5 shows a histogram in which the luminance is displayed along the x-axis and pixel number along the y-axis. Only those pixel groups with signal values in the Δ range, which extends from a minimum luminance value Lmin to a maximum luminance value Lmax, are retained.
Pixel groups with a luminance value below Lmin or above Lmax in a particular image within a series are ignored. Instead, these pixel groups are analyzed in those images within the same series where the exposure time gives rise to a luminance value within the acceptable range Δ. As such, a more robust image analysis is achieved.
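The clipping rule of Figure 5 can be written as a mask over the exposures of each series. The Lmin and Lmax values below are illustrative placeholders for the stored lower and upper luminance thresholds; the text does not give numeric values.

```python
import numpy as np

L_MIN, L_MAX = 0.05, 0.95     # assumed thresholds on a normalised luminance scale

def filter_clipped(series_lums, l_min=L_MIN, l_max=L_MAX):
    """Mask out clipped pixel groups per exposure.

    series_lums: array of shape (num_exposures, groups_y, groups_x).
    Returns (masked, keep): `masked` has NaN where a group is under- or over-exposed
    in that image, and `keep` marks groups usable in at least one exposure; groups
    clipped in every exposure of the series are discarded from the analysis."""
    ok = (series_lums >= l_min) & (series_lums <= l_max)
    masked = np.where(ok, series_lums, np.nan)
    keep = ok.any(axis=0)
    return masked, keep
```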
To conserve processing time and resources, further intelligence may be added to the method described above. Since the image sensors gather images over an extended time period, it is possible to gather and store information regarding the locations where motion has previously been detected. In locations where little movement has been detected, fewer series of images may be captured. This approach has the advantage that, based on the information stored in the memory of the motion sensor device, the processor need only process a lower volume of data.
The motion sensor device 2 may be configured to control exposure time given information extracted from previous exposures. An example of this would involve refining an initial exposure minimum and maximum time and the number of required exposures (i.e. images) in a series until no pixels in the image are recorded as "clipped". The information extracted from the image can also be related to the presence or absence of a moving object, tuning the exposure further if an object is moving in a region. The motion sensor device 2 may be further configured to set the number of images to be captured in a series and their exposure time according to the frequency of motion observed in an area. As an example, if motion is observed in an area of the image in which motion is often observed, more exposures can be used to make sure the observation is accurate, while fewer exposures could be used if motion is observed in areas of the image in which little motion typically occurs.
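A sketch of that adaptation is given below, assuming a stored motion-frequency estimate and a predetermined frequency threshold; the numeric threshold, step sizes and limits are illustrative, not values taken from the text.

```python
FREQ_THRESHOLD = 0.1          # assumed predetermined motion-frequency threshold
MIN_EXPOSURES, MAX_EXPOSURES = 2, 6

def adjust_num_exposures(num_exposures, motion_frequency, freq_threshold=FREQ_THRESHOLD):
    """Shorten the series where motion is rarely observed, lengthen it where motion is frequent."""
    if motion_frequency < freq_threshold:
        return max(MIN_EXPOSURES, num_exposures - 1)
    return min(MAX_EXPOSURES, num_exposures + 1)

def refine_exposure_range(t_min_ms, t_max_ms, any_underexposed, any_overexposed, step=1.5):
    """Widen the exposure bracket while pixel groups are still recorded as clipped."""
    if any_underexposed:
        t_max_ms *= step          # capture more light for the dark regions
    if any_overexposed:
        t_min_ms /= step          # capture less light for the bright regions
    return t_min_ms, t_max_ms
```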
By capturing a series of images n with a variety of different exposure times, drawbacks associated with image capture in a real-life environment are minimized.
Figure 6 is a schematic view of a motion sensitive lighting system S which may be used in a situation such as that shown in Figure 1. The lighting system S uses the motion sensor device 2 described above connected to a lighting controller 8 and luminaires 9 via a wired or wireless network connection 10. The processor 5 may determine that motion has occurred in a particular location within a premises and output this information to the lighting controller 8. The lighting controller 8 then activates luminaires 9 situated in the area where motion is detected. Luminaires 9 outside the area where motion is detected are not activated.
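The interaction of Figure 6 could be modelled as in the following sketch, in which the Luminaire and LightingController classes and the area identifiers are hypothetical; the text only specifies that the determination and its location are output to the controller, which activates the luminaires covering that area.

```python
class Luminaire:
    """One luminaire 9, covering a named area of the premises."""
    def __init__(self, area):
        self.area = area
        self.on = False

    def activate(self):
        self.on = True

class LightingController:
    """Lighting controller 8: activates only the luminaires covering the reported area."""
    def __init__(self, luminaires):
        self.luminaires = luminaires

    def on_motion(self, area):
        """Called when the motion sensor device 2 reports motion in `area`."""
        for luminaire in self.luminaires:
            if luminaire.area == area:   # luminaires outside the area stay off
                luminaire.activate()
```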
It will be appreciated that the term "comprising" does not exclude other elements or steps and that the indefinite article "a" or "an" does not exclude a plurality. A single processor may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to an advantage. Any reference signs in the claims should not be construed as limiting the scope of the claims.
Although claims have been formulated in this application to particular combinations of features, it should be understood that the scope of the disclosure of the present invention also includes any novel features or any novel combinations of features disclosed herein either explicitly or implicitly or any generalization thereof, whether or not it relates to the same invention as presently claimed in any claim and whether or not it mitigates any or all of the same technical problems as does the present invention. The applicants hereby give notice that new claims may be formulated to such features and/or combinations of features during the prosecution of the present application or of any further application derived therefrom.
Other modifications and variations falling within the scope of the claims hereinafter will be evident to those skilled in the art.

Claims

CLAIMS:
1. An apparatus for registering a scene, the apparatus comprising:
an image registration means arranged to register a first and second image series (n, n+1),
wherein each series comprises a plurality of images (E1-E4), each of the plurality of images having an exposure time, and
wherein at least one of the plurality of images has an exposure time which is different from the exposure times of the remaining images in the series; and
a processing means (5) configured to:
define a first group of pixels in an image in the first image series, define a second group of pixels corresponding spatially to the first group of pixels in an image in the second image series;
perform an analysis of the first and second group of pixels; and based on the analysis, determine whether motion has occurred within the scene.
2. An apparatus according to claim 1, wherein the processor is further configured to:
store lower and upper luminance threshold values (Lmin, Lmax), determine whether a luminance value associated with a group of pixels is less than the lower luminance threshold value or greater than the upper luminance threshold value, and if so
discard the group of pixels from the analysis.
3. An apparatus according to either claim 1 or claim 2, further comprising:
means for storing (6) motion history; and
means for adjusting the exposure times of the plurality of images and/or the number of images within the plurality of images based on the motion history.
4. An apparatus according to claim 3, further comprising:
means for storing (6) a predetermined frequency threshold and a motion frequency of the scene,
means for determining whether the motion frequency of the scene is lower than the predetermined frequency threshold, and if so,
decreasing the number of images in each series.
5. An apparatus according to any preceding claim, wherein the analysis comprises extracting texture information from images in the sequence of image series.
6. An apparatus according to any preceding claim, wherein the processing means is configured to output a determination that motion has occurred within the scene to a device.
7. An apparatus according to claim 6, wherein the device is a lighting controller (8).
8. An apparatus according to any preceding claim, wherein the image registration means is an image sensor (4).
9. A method for registering a scene, the method comprising:
registering a first and second image series (n, n+1),
wherein each series comprises a plurality of images (E1-E4), each of the plurality of images having an exposure time, and
wherein at least one of the plurality of images has an exposure time which is different from the exposure times of the remaining images in the series;
defining a first group of pixels in an image in the first image series, defining a second group of pixels corresponding spatially to the first group of pixels in an image in the second image series;
performing an analysis of the first and second group of pixels; and determining, based on the analysis, whether motion has occurred within the scene.
10. A method according to claim 9, further comprising:
storing lower and upper luminance threshold values (Lmin, Lmax), determining whether a luminance value associated with the group of pixels is less than the lower luminance threshold value or greater than the upper luminance threshold value, and if so
discarding the group of pixels from the analysis.
11. A method according to either claim 9 or claim 10, further comprising:
storing motion history; and
adjusting the exposure times of the plurality of images and/or the number of images within the plurality of images based on the motion history.
12. A method according to claim 11, further comprising:
storing a predetermined frequency threshold and a motion frequency of the scene,
determining whether the motion frequency of the scene is lower than the predetermined frequency threshold, and if so,
decreasing the number of images in each series.
13. A method according to any of claims 9-12, wherein performing an analysis comprises extracting texture information from images in the sequence of image series.
14. A method according to any of claims 9-13, further comprising outputting a determination that motion has occurred within the scene to a device.
15. Computer program instructions that, when executed on a processor, perform the method of any of claims 9-14.
PCT/IB2012/056188 2011-11-14 2012-11-06 Method and apparatus for registering a scene WO2013072807A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201161559169P 2011-11-14 2011-11-14
US61/559,169 2011-11-14

Publications (1)

Publication Number Publication Date
WO2013072807A1 (en)

Family

ID=47278922

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2012/056188 WO2013072807A1 (en) 2011-11-14 2012-11-06 Method and apparatus for registering a scene

Country Status (1)

Country Link
WO (1) WO2013072807A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7495699B2 (en) * 2002-03-27 2009-02-24 The Trustees Of Columbia University In The City Of New York Imaging method and system
US20110142369A1 (en) * 2009-12-16 2011-06-16 Nvidia Corporation System and Method for Constructing a Motion-Compensated Composite Image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
RADKE R J ET AL: "Image Change Detection Algorithms: A Systematic Survey", IEEE TRANSACTIONS ON IMAGE PROCESSING, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 14, no. 3, 1 March 2005 (2005-03-01), pages 294 - 307, XP002602265, ISSN: 1057-7149, DOI: 10.1109/TIP.2004.838698 *
SAND P ET AL: "Video Matching", ACM TRANSACTIONS ON GRAPHICS (TOG), ACM, US, vol. 22, no. 3, 1 January 2004 (2004-01-01), pages 592 - 599, XP007905681, ISSN: 0730-0301, DOI: 10.1145/1015706.1015765 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112233139A (en) * 2019-06-28 2021-01-15 康耐视公司 System and method for detecting motion during 3D data reconstruction

Similar Documents

Publication Publication Date Title
US10070053B2 (en) Method and camera for determining an image adjustment parameter
CN103139547B (en) The method of pick-up lens occlusion state is judged based on video signal
JP6546828B2 (en) Modify at least one parameter used in video processing
US10304306B2 (en) Smoke detection system and method using a camera
US10395498B2 (en) Fire detection apparatus utilizing a camera
EP2608529B1 (en) Camera and method for optimizing the exposure of an image frame in a sequence of image frames capturing a scene based on level of motion in the scene
US20160142680A1 (en) Image processing apparatus, image processing method, and storage medium
CN110798592B (en) Object movement detection method, device and equipment based on video image and storage medium
JP2021044599A (en) Detection device and sensor
CN107635099B (en) Human body induction double-optical network camera and security monitoring system
WO2018005616A1 (en) Smoke detection system and method using a camera
CN108830161B (en) Smog identification method based on video stream data
JP2923652B2 (en) Monitoring system
WO2013093771A1 (en) Monitoring a scene
AU2002232008B2 (en) Method of detecting a significant change of scene
JP2000184359A (en) Monitoring device and system therefor
CN106898014B (en) Intrusion detection method based on depth camera
WO2013072807A1 (en) Method and apparatus for registering a scene
AU2002232008A1 (en) Method of detecting a significant change of scene
KR20160061614A (en) Fire detection system
JP2016103246A (en) Image monitoring device
JP3933453B2 (en) Image processing apparatus and moving body monitoring apparatus
JP2007336431A (en) Video monitoring apparatus and method
US9320113B2 (en) Method of self-calibrating a lighting device and a lighting device performing the method
SE524332C2 (en) System and method for optical monitoring of a volume

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12795076

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12795076

Country of ref document: EP

Kind code of ref document: A1