WO2003052711A1 - Method and device for identifying motion - Google Patents
Method and device for identifying motion
- Publication number
- WO2003052711A1 (PCT/FI2002/001022)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- motion
- data elements
- threshold value
- image area
- Prior art date
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19602—Image analysis to detect motion of the intruder, e.g. by frame subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19602—Image analysis to detect motion of the intruder, e.g. by frame subtraction
- G08B13/19606—Discriminating between target movement or movement in an area of interest and other non-signicative movements, e.g. target movements induced by camera shake or movements of pets, falling leaves, rotating fan
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19665—Details related to the storage of video surveillance data
- G08B13/19667—Details related to data compression, encryption or encoding, e.g. resolution modes for reducing data volume to lower transmission bandwidth or memory requirements
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19678—User interface
- G08B13/19689—Remote control of cameras, e.g. remote orientation or image zooming control for a PTZ camera
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/69—Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
Definitions
- the invention relates to a method of identifying motion in successive images and to a device for identifying motion in successive images.
- a problem in such systems is that much manpower is needed and, in addition, that a person can make mistakes, particularly during the dark hours at night, i.e. a crime may remain unnoticed or be noticed too late to be stopped. Furthermore, motion is difficult to distinguish in images shot in the dark.
- a solution is to illuminate the targets to be monitored, but this increases costs in the form of lighting devices and the electricity consumed.
- a solution to the problems is to use motion detectors placed in the area to be monitored.
- the problem is the limited coverage of the detectors, wherefore quite a large number of them have to be placed in the areas monitored, which causes extra costs.
- the object of the invention is to provide an improved method of identifying motion in successive images and an improved device for identifying motion in successive images.
- a method comprising: generating difference data descriptive of the difference in size between each luminance pixel of the previous image and the corresponding luminance pixel of the present image; adding the generated difference data to a cumulative motion register, in which motion register a data element corresponds to each luminance pixel of an image; and identifying such data elements as motion whose value exceeds a predetermined threshold value.
- a device comprising: means for generating difference data descriptive of the difference in size between each luminance pixel of the previous image and the corresponding luminance pixel of the present image; means for adding the generated difference data to a cumulative motion register, in which motion register a data element corresponds to each luminance pixel of an image; and means for identifying such data elements as motion whose value exceeds a predetermined threshold value.
- a device configured to: generate difference data descriptive of the difference in size between each luminance pixel of the previous image and the correspond- ing luminance pixel of the present image; add the generated difference data to a cumulative motion register, in which motion register a data element corresponds to each luminance pixel of an image; and identify such data elements as motion whose value exceeds a predetermined threshold value.
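The three claimed steps (difference data, cumulative motion register, thresholding) can be sketched as follows. This is an illustrative reconstruction, not the patented implementation: the use of a signed difference and of an absolute-value comparison against the threshold are assumptions.

```python
import numpy as np

def update_motion_register(register, prev_lum, cur_lum, threshold):
    """One update step of a cumulative motion register.

    register  : int array, one data element per luminance pixel
    prev_lum  : luminance pixels of the previous image (uint8)
    cur_lum   : luminance pixels of the present image (uint8)
    threshold : predetermined threshold value for motion

    Returns the updated register and a boolean mask of the data
    elements identified as motion.
    """
    # Difference data: per-pixel difference between the previous and
    # the present luminance pixels (signed, hence the int32 cast).
    diff = prev_lum.astype(np.int32) - cur_lum.astype(np.int32)
    # Add the difference data to the cumulative register.
    register = register + diff
    # Data elements whose magnitude exceeds the threshold are motion.
    motion = np.abs(register) > threshold
    return register, motion
```

Because the register is signed and cumulative, a change that later reverses (e.g. an object appearing and then leaving) cancels back towards zero, while a persistent change keeps the element above the threshold.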
- the invention is based on analyzing successive images generated by a surveillance camera by analyzing the differences between their luminance pixels, enabling automatic identification of motion.
- the procedure slightly resembles the motion estimation used in video coding, but the motion register employed is cumulative, allowing a person who moves for a short time and then stops to still be detected in the image.
- Motion identification performed in this manner is also significantly faster than traditional motion estimation, since the source of the motion is not of interest; the invention is concerned only with identifying that motion occurs.
- the method and device of the invention provide a plurality of advantages. The method enables the implementation of a safety camera system whose control room requires less manpower than previously. Furthermore, the work of a person working in the control room is facilitated because the system automatically detects motion, and the system is also able to automatically give an alarm.
- the method also enables the analysis of tapes recorded by a monitoring camera, the aim being to identify the interesting points in the tapes, i.e. the points including motion.
- the embodiments of the method also have other advantages, which will be described in detail below.
- Figure 1 is a block diagram of a device for identifying motion in successive images
- Figure 2 is a flow diagram illustrating a method of identifying motion in successive images
- Figure 3 illustrates the processing of a luminance pixel in a motion register
- Figures 4A, 5A, 6A and 7A show four images selected from a sequence of successive images
- Figures 4B, 5B, 6B and 7B show the motion identified in Figures 4A, 5A, 6A and 7A as a black-and-white image
- Figures 4C, 5C, 6C and 7C show an image zoomed to the motion identified in Figures 4A, 5A, 6A and 7A;
- Figure 8 illustrates the effect of moving a camera;
- Figure 9 illustrates the effect of the use of an optical zoom in a camera
- Figure 10 illustrates the processing of a luminance pixel in a motion register in the example of Figures 4A, 5A, 6A and 7A.
- the device for identifying motion in successive images will be described with reference to Figure 1.
- the basic principle of the device is simple; it is configured to generate difference data by subtracting, from each luminance pixel in the previous image, the corresponding luminance pixel in the present image.
- the device is also configured to add the generated difference data to a cumulative motion register.
- a data element corresponds to each luminance pixel in the image.
- the device is configured to identify such data elements as motion whose values exceed a predetermined threshold value.
- the device is able to communicate with one or more image sources, in our example with two video cameras 158, 160. Any known device generating successive images 100A, 100B can be used as the image source.
- the information obtained from the image source has to contain luminance information, either as separate luminance pixels or otherwise enable the separation or conversion of the luminance pixels from the information coming from the image source.
- the cameras 158, 160 can be polled one at a time until motion is detected somewhere. If motion is detected in the images of several cameras, the image to be transmitted may be selected for instance based on the motion area, or images can be sent from these cameras alternately in a sequence of a given length.
- Several speeds may be used in image processing. For example, if no motion is detected, a speed of 1 image/second is used, whereas 15 images/second is used during the pre-alarm and alarm states.
- when motion is detected, the read speed is increased, i.e. a shift occurs to what is known as a pre-alarm state. If the motion is continuous, a shift occurs to an alarm state, the system starts to send images, and the zoom may also be turned on. Images of the pre-alarm state can be stored and sent if the alarm state has to be entered. When the motion stops, the system returns to the pre-alarm state, where it remains for a predetermined time. If still no motion is detected, the alarm can be switched off and the read speed dropped to the minimum.
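The idle / pre-alarm / alarm behaviour described above can be sketched as a small state machine. The state names, frame rates and the patience counter are illustrative assumptions, not values prescribed by the patent:

```python
IDLE, PRE_ALARM, ALARM = "idle", "pre-alarm", "alarm"
RATE = {IDLE: 1, PRE_ALARM: 15, ALARM: 15}  # images/second read speed

def next_state(state, motion, quiet_frames, patience=15):
    """Return (new_state, new_quiet_frames) after one processed image.

    motion       : was motion identified in this image?
    quiet_frames : consecutive images without motion so far
    patience     : how long the pre-alarm state is held after motion stops
    """
    if motion:
        if state == IDLE:
            return PRE_ALARM, 0       # motion detected: raise the read speed
        return ALARM, 0               # continuous motion escalates to alarm
    if state == ALARM:
        return PRE_ALARM, 0           # motion stopped: fall back to pre-alarm
    if state == PRE_ALARM and quiet_frames + 1 >= patience:
        return IDLE, 0                # still quiet: drop to minimum read speed
    return state, quiet_frames + 1
```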
- Coding successive images is used to reduce the amount of data, allowing it to be stored more efficiently in some memory means or to be transferred using a telecommunication connection.
- An example of a video coding standard is MPEG-4 (Moving Pictures Expert Group).
- the image sizes employed vary, e.g. the cif size is 352 x 288 pixels and the qcif size is 176 x 144 pixels.
- a single image is divided into blocks including information on brightness, colour and location.
- the data in the blocks are compressed by blocks using the desired coding method. Compression is based on deleting less significant data. Compression methods are mainly divided into three different categories: spectral redundancy reduction, spatial redundancy reduction, and temporal redundancy reduction.
- the YUV colour model is applied, for example.
- the YUV model utilizes the fact that the human eye is more sensitive to changes in luminance, i.e. brightness, than to changes in chrominance, i.e. colour.
- the YUV model includes one luminance component (Y) and two chrominance components (U and V, also denoted Cb and Cr).
- a luminance block according to the H.263 video coding standard is 16 x 16 pixels, and both chrominance blocks, which cover the same area as the luminance block, are 8 x 8 pixels.
- the combination of one luminance block and two chrominance blocks is called a macro block.
- Each pixel, in both the luminance and the chrominance blocks, can receive a value between 0 and 255, i.e. eight bits are needed to represent one pixel.
- luminance pixel value 0 means black and value 255 white.
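For image sources that deliver RGB rather than YUV, the luminance component can be derived with the standard ITU-R BT.601 weighting. This is a common convention, not something the patent prescribes:

```python
def luminance(r, g, b):
    """ITU-R BT.601 luma from 8-bit RGB; 0 is black, 255 is white."""
    return int(round(0.299 * r + 0.587 * g + 0.114 * b))
```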
- Successive images 100A, 100B coming to a device may thus conform to the YUV model, wherein luminance pixels constitute a separate component, for instance 352 x 288 luminance pixels in a cif-sized image.
- the device has to have an image source selector 102 for selecting the information stream to be processed with the device.
- no image source selector 102 is needed if there is only one image source.
- reference number 102 also denotes the control circuit of an image source, for instance a camera.
- the image 104 of the selected image source is applied to a frame buffer 106, in which the luminance and chrominance components of the image are stored.
- difference data descriptive of the difference in size between each luminance pixel of the previous image and the corresponding luminance pixel of the present image are generated. This difference data may be generated for instance by subtracting, from each luminance pixel in a previous image 114, the corresponding luminance pixel of a present image 112, or by some other mathematical operation giving the same result.
- the difference data 118 generated are then added to a cumulative motion register 126.
- the cumulative motion register 126 is a memory whose size is the height of the image times the width of the image.
- a feasible value range is [-255, 255] if an image in the YUV format is used.
- the memory may also be smaller, but in that case the data stored therein have to be preprocessed.
- the device is configured to reduce the values of the data elements in the motion register that are unequal to zero towards zero by a predetermined amount in association with each addition of difference data.
- this is represented by block 120: the generated difference data 118 are applied to it, a predetermined number is subtracted from each data element 125 read from the motion register 126, the difference data 124 are added to the result, and the sum is applied to the motion register 126 as the new value of the data element.
- the predetermined number may be for instance one or another number greater than one.
- the subtraction may naturally also be carried out by first adding the difference data to the motion register 126 and then subtracting the predetermined number from each data element that deviates from zero.
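The reduction of non-zero data elements towards zero can be sketched as follows, assuming signed register values and a reduction amount of one (as in the example of Figure 3):

```python
import numpy as np

def decay_toward_zero(register, amount=1):
    """Move every non-zero data element `amount` steps toward zero,
    leaving zero elements unchanged; applied at each addition of
    difference data so that stationary changes merge into the background."""
    return np.sign(register) * np.maximum(np.abs(register) - amount, 0)
```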
- the device includes a control block 136 for controlling the operation of the different parts of the device.
- a predetermined number 122 is applied to block 120 from the control block 136, the number being subtracted from each data element.
- the control block 136 also controls 140 the image source selector 102.
- the control block 136 scans 128 the motion register 126 in a predetermined manner, for instance by rows or columns. Such data elements 126 in the motion register whose values exceed a predetermined threshold value are then identified as motion in the control block 136.
- the control block identifies motion by blocks, for instance by luminance macro blocks. The blocks may also partially overlap.
- motion is identified in a block of an image only if a predetermined number of data elements corresponding to said block in the motion register 126 exceeds the predetermined threshold value.
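The block-wise identification rule above can be sketched as follows, under the assumptions of square blocks and signed register values:

```python
import numpy as np

def block_has_motion(register, top, left, size, threshold, min_count):
    """A block is identified as motion only if at least `min_count` of
    its data elements in the motion register exceed the threshold."""
    block = register[top:top + size, left:left + size]
    return int((np.abs(block) > threshold).sum()) >= min_count
```

Requiring several elements per block (rather than a single one) suppresses isolated noisy pixels.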
- Figure 3 illustrates the processing of a luminance pixel in the motion register 126.
- the example is based on prototypes and experiments made by the applicant. The basis of the example is a series of events shot with a video camera, the series showing a light table. A person arrives, and leaves a black briefcase on the table.
- the horizontal axis in Figure 3 shows numbers 1 to 150 of successive images, and the vertical axis the value range of both the luminance pixel and the data element in the motion register. Changes in the value of the luminance pixel under study are shown by curve 300.
- the variation in the value of the data element corresponding to said luminance pixel in the motion register 126 is shown by curve 302.
- the luminance pixel selected for study is part of the light table.
- the slight variation in the pixel value is noise.
- the luminance pixel is part of the black briefcase that is placed on the table.
- the luminance pixel is part of the black briefcase left on the table.
- this predetermined amount is one. The reduction in the value of the data elements results in the entire briefcase having disappeared from the motion register 126 by image 137, i.e. merged into the background.
- a suitable threshold value for removing noise, but not motion, could be for instance between 10 and 15, in which case motion would be detected at about image 130.
- the control block 136 controls 144 the frame buffer 106 to apply the present image to a zooming block 132 in the area 130 to be zoomed.
- the control data 144 contain information about which area of the image is to be zoomed and sent as image 130 to the zooming block 132.
- the control data 134 applied to the zooming block 132 indicate the ratio between the sizes of the images received and transmitted. Said ratio is 1:1 if zooming is not used.
- Automatic zooming to motion is an optional function, which is turned on via a user interface in the device (not shown in Figure 1).
- the zooming block 132 zooms to the detected motion, for instance by interpolation of the zoomed area to the size of the original image by using known interpolation methods.
- in motion identification, an area where there is motion is detected. This area is outlined such that the ratio of height to width always remains the same as in the original image.
- the area to be zoomed should be slightly larger than the area where there is motion.
- the enlargement should also have a maximum, which depends on the image size used and the accuracy of the camera. For example, a 100-fold enlargement of a qcif-sized image is no longer reasonable.
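The outlining of the zoom area can be sketched as follows: the motion bounding box is enlarged by a margin, its shape is matched to the height/width ratio of the original image, and the enlargement factor is capped. The margin and maximum-zoom figures are illustrative assumptions:

```python
def zoom_rect(bbox, img_w, img_h, margin=0.1, max_zoom=4.0):
    """Return (x, y, w, h) of the area to be zoomed for a motion
    bounding box bbox = (x0, y0, x1, y1), keeping the original
    image's aspect ratio and capping the enlargement factor."""
    x0, y0, x1, y1 = bbox
    # slightly larger than the area where there is motion
    w = (x1 - x0) * (1 + 2 * margin)
    h = (y1 - y0) * (1 + 2 * margin)
    # grow the shorter side so that w/h matches img_w/img_h
    if w * img_h < h * img_w:
        w = h * img_w / img_h
    else:
        h = w * img_h / img_w
    # enlargement factor, capped between 1x and max_zoom
    z = min(max(img_w / w, 1.0), max_zoom)
    w, h = img_w / z, img_h / z
    # centre on the motion, clamped inside the image
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    x = min(max(cx - w / 2, 0), img_w - w)
    y = min(max(cy - h / 2, 0), img_h - h)
    return x, y, w, h
```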
- control block 136 and/or the zooming block 132 stores the zooming point in memory in order to control the change in the area to be zoomed.
- Information on the zooming point can be utilized by allowing only a change of a predetermined size in the zooming point between two successive images.
- control block 136 and/or the zooming block 132 may store the zooming ratio in memory.
- Information on the zooming ratio can be utilized by allowing only a change of a predetermined size in the zooming ratio between two successive images.
- the image becomes more pleasant to look at, since it does not change too rapidly based on motion; instead, it changes in steps of a given size, for example.
- Adjusting the speed of change is an optimization task between the monitoring speed of detected motion, image quality, and information content.
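Allowing only a change of a predetermined size in the zooming point between two successive images amounts to clamping each move to a maximum step; `max_step` here is an illustrative parameter:

```python
def step_toward(current, target, max_step):
    """Move `current` toward `target` by at most `max_step` per image,
    so the zoomed view does not jump abruptly between frames."""
    delta = target - current
    if abs(delta) <= max_step:
        return target
    return current + max_step if delta > 0 else current - max_step
```

The same clamping can be applied independently to the x and y coordinates of the zooming point and to the zooming ratio.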
- a zoomed or non-zoomed image 142 is then applied to an optional encoding block 150, with which the image may be encoded, if desired.
- Encoding a zoomed image brings the advantage that the result of the encoding improves, since redundant information has been removed from the image, whereby for example a burglar in the zoomed image can be identified more easily.
- the encoding block 150 may be for instance some known video encoder, e.g. an MPEG-4 encoder.
- the image 142 may also be applied for storage to a memory means included in the device or to a memory means connected to the device, e.g. a computer hard disk.
- the image 142 may also be applied to a viewing apparatus, e.g. a monitor; in this case, depending on its presentation format, the image may have to be converted.
- the device also includes a block 146 for generating a black-and-white image 148 from the data elements included in the motion register 126.
- the data elements can be applied 144 to block 146 from the control block 136, since the control block 136 reads 128 the data elements from the motion register 126, but block 146 could naturally also read them directly from the motion register 126.
- a black-and-white image may be implemented for instance by presenting the data elements exceeding a predetermined threshold value in white and the other data elements in black, or by presenting the data elements exceeding a predetermined threshold value in black and the other data elements in white.
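Both polarities of the black-and-white rendering described above can be sketched as:

```python
import numpy as np

def motion_image(register, threshold, motion_white=True):
    """Render the motion register as a black-and-white image: data
    elements exceeding the threshold in one colour, the rest in the
    other. The absolute-value comparison is an assumption."""
    over = np.abs(register) > threshold
    img = np.where(over, 255, 0).astype(np.uint8)
    return img if motion_white else 255 - img
```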
- This embodiment brings the advantage that the detected motion is much easier to see in a black-and-white image 148 presenting motion than in the original image 100A, 100B.
- if the threshold value used for generating the black-and-white image is the same as that used for motion identification, i.e. for instance 10 to 15, then the person's motion and the placing of the briefcase on the table are visible in the black-and-white image up to about image 130.
- a smaller number can be set as the threshold value of a black-and-white image, whereby motion is visible longer in the black-and-white image generated.
- the predetermined threshold value used in motion identification is adjusted to adjust the sensitivity of motion identification.
- this can be implemented by generating a threshold value for the present image in the control block 136 by using a magnitude descriptive of the average value of the luminance pixels of the present image and the threshold value of the previous image.
- p is a luminance pixel
- n x m is the size of the image. Not all luminance pixels of the image have to be used in the calculation of the average.
- Formula 2 uses a linear function, wherein p and q are constants, but a non-linear function may also be used.
- the threshold value may also be weighted, r and s being constants, and by changing them the threshold value, and thus the sensitivity of the device to large variations in brightness, can be adjusted.
- the user interface of the device may include a controller for stepless control of the threshold value to achieve the desired sensitivity.
- Weighting is worthwhile so that the system does not react to large, rapid variations in brightness, for instance a change in brightness caused by lightning or a cloud.
- the average does not have to be used in Formulae 2 and 3; some other statistical magnitude descriptive of the average value can also be used. Noise can also be measured from random pixels, and the sensitivity adjusted accordingly.
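Since Formulae 1-3 themselves are not reproduced in this excerpt, the following is only a plausible reconstruction from the surrounding description: an average of the luminance pixels (Formula 1), a linear function of that average with constants p and q (Formula 2), and a weighting against the previous image's threshold with constants r and s (Formula 3). All numeric values are assumptions:

```python
import numpy as np

def adaptive_threshold(cur_lum, prev_threshold, p=0.05, q=5.0, r=0.9, s=0.1):
    """Hypothetical reconstruction of the adaptive threshold: the
    weighting with r and s damps the reaction to large, rapid
    brightness changes such as lightning or a passing cloud."""
    avg = float(np.mean(cur_lum))            # Formula 1: average luminance
    t_cur = p * avg + q                      # Formula 2: linear function
    return (r * prev_threshold + s * t_cur) / (r + s)  # Formula 3: weighting
```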
- if the image area in the present image has moved relative to the image area of the previous image, i.e. the camera 158, 160 that generated the images has been moved, the device is configured, for instance by the control block 136 being commanded, to move the luminance pixels of the previous image to the luminance pixels showing the same image area in the present image, to move the corresponding data elements in the motion register 126 to said same image area, and to zero in the motion register 126 only the data elements corresponding to the luminance pixels found solely in the present image.
- if the image area in the present image is zoomed optically relative to the image area of the previous image, i.e. optical zooming is used in the camera 158, 160 that generated the images, the device is configured, for instance by the control block 136 being commanded, to modify the image area in the previous image to correspond to the image area of the present image by interpolating therein the missing luminance pixels corresponding to the image area of the present image, and to modify the image area of the motion register 126 to correspond to the image area of the present image by interpolating therein the missing data elements corresponding to the image area of the present image.
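The compensation for camera movement in the two paragraphs above can be sketched for the whole-pixel panning case; the optical-zoom case additionally requires interpolation and is omitted here, and the sign convention for (dx, dy) is an assumption:

```python
import numpy as np

def shift_register(register, dx, dy):
    """When the camera has panned by (dx, dy) pixels, move the data
    elements to the positions showing the same image area (the element
    previously at (x, y) moves to (x - dx, y - dy)) and zero the
    elements of the newly exposed area, so the pan itself is not
    identified as motion."""
    out = np.zeros_like(register)
    h, w = register.shape
    # source and destination offsets of the overlapping area
    sy0, dy0 = max(dy, 0), max(-dy, 0)
    sx0, dx0 = max(dx, 0), max(-dx, 0)
    oh, ow = h - abs(dy), w - abs(dx)
    if oh > 0 and ow > 0:
        out[dy0:dy0 + oh, dx0:dx0 + ow] = register[sy0:sy0 + oh, sx0:sx0 + ow]
    return out
```

The same shift would be applied to the stored luminance pixels of the previous image before the next difference is computed.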
- the device is configured, for instance by the control block 136 commanding 140 the selection block 102, to control 138 the camera 160 that generated the images to move in the direction of the motion identified in the image.
- the camera 160 includes for instance an electric motor for turning the camera 160 in the direction of the motion.
- the control command 138 indicates the magnitude of the required motion, for instance in degrees.
- the device is configured, for instance by the control block 136 commanding 140 the selection block 102, to control 138 the camera 160 that generated the images to zoom optically to the motion identified in the image.
- the camera 160 includes an electrically controlled optical zoom for zooming the image generated by the camera 160 to the motion.
- the control command 138 indicates the required change in the zooming ratio.
- the device also includes a block 154 for giving an alarm if motion is identified.
- reference number 156 denotes an alarm.
- the same reference numbers 154, 156 also describe an embodiment wherein the device includes a block 154 for transmitting successive images, a generated black-and-white image or a zoomed image using a telecommunication connection 156. Images may be transmitted with or without an alarm. An image to be transmitted may be an image originating from the original image source, a generated black-and-white image or a zoomed image. The camera that generated the images may also have been moved or optically zoomed in the direction of the motion identified in the image.
- the telecommunication connection uses known solutions, e.g.
- the device may also be configured to send via the telecommunication connection 156 only an image wherein motion is detected. It may also be automatically concluded from the selected sensitivity whether it is worthwhile to transmit an image coming from a camera or a black-and-white image generated from the motion register. A black-and-white image may be transmitted if the brightness of the area to be shot drops below a given level. In this way, the device operates nearly in the dark such that a person looking at the image finds it easier to detect motion.
- the device blocks shown in Figure 1 can be implemented as one or several application-specific integrated circuits (ASIC).
- Other implementations are also feasible, e.g. a circuit constructed from separate logic components, or a processor and software.
- a hybrid of these implementations is also feasible.
- a person skilled in the art takes into account for instance the requirements set on the size and power consumption of the device, the required processing power, manufacturing costs and production volumes.
- Figure 1 mainly describes functional entities, whereby the parts of a practical hardware implementation may deviate from what is described, since the final question is the degree of integration: how to implement the device for identifying motion in successive images so that the requested functionality is achieved as efficiently as possible and at reasonable cost in said application.
- the above device is applicable to a plurality of purposes. It may be used to study stored image material, and search the material automatically for points where there is motion.
- the device may be placed in connection with a surveillance camera or in a central control room.
- a very inexpensive version can be made of the device.
- the device also includes an inexpensive camera and can be connected to the mains.
- the device also includes parts enabling it to operate as a subscriber terminal in a mobile network. This allows the device to be used to guard for instance its owner's country house, and the device sends image and/or an alarm to the owner's subscriber terminal as soon as it detects motion.
- the device thus operates as a burglar alarm or a surveillance device for checking the situation at the country house.
- other sensors, for instance a fire alarm, can be coupled to the device.
- a person skilled in the art is able to think of also other applications for the basic device described.
- in 202, the image source to be used is selected, e.g. one camera out of several cameras. If there is only one camera, 202 need not be executed.
- an image is read from the camera in 204.
- the image should be such that the luminance component can be read directly from it, or such that the luminance component can be generated based on information included in the image.
- the luminance component is stored in memory.
- the values of the previous image and the motion register are modified in the above manner so that motion identification operates correctly, and the motion caused by moving the camera or by optical zooming is not identified as motion.
- the process then continues from 240 to 208, where the following image is read from the camera.
- a check is made to see if the camera employed needs to be changed. If it does not have to be changed, 234 is entered, from where the operation continues in the above-described manner. If the camera is changed to another one, 228 is entered, where the motion register is emptied so that the motion register values of the previous camera do not confuse operation with the new camera.
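The camera-handling flow above (selecting a source, reading successive images, and emptying the motion register in 228 on a camera change) can be sketched roughly as follows. All names and the frame format are illustrative assumptions, and the register update is reduced to a bare accumulation of luminance differences rather than the patent's full procedure.

```python
def run_monitoring(frames):
    """frames: list of (camera_id, luminance_rows) pairs in arrival order.
    Returns the final motion register and how often it was emptied."""
    register = None
    previous = None
    active = None
    times_emptied = 0
    for camera_id, luma in frames:
        if active is not None and camera_id != active:
            # the camera changed (228): empty the motion register so that
            # the previous camera's values do not confuse the new camera
            register = None
            previous = None
            times_emptied += 1
        active = camera_id
        if previous is not None:
            # accumulate luminance differences between successive images
            if register is None:
                register = [[0] * len(row) for row in luma]
            for y, (row, prev_row) in enumerate(zip(luma, previous)):
                for x, (a, b) in enumerate(zip(row, prev_row)):
                    register[y][x] += abs(a - b)
        previous = luma
    return register, times_emptied
```

Feeding frames from camera "A" followed by camera "B" shows the register being reset exactly once, at the switch.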
- Figure 2 does not show the end of the execution of the method, since, in principle, it can be ended at any point. A natural ending point is when no more successive images are to be studied.
- a device of the above type is applicable to executing the method, but other kinds of devices are also suited to it.
- the attached dependent method claims describe the preferred embodiments of the method. Their operation was described above in connection with the device, and the description is therefore not repeated herein.
- Figures 4A, 5A, 6A and 7A show four images selected from a sequence of successive images. Since all the images of the image sequence cannot be presented herein, four representative images have been selected.
- a camera shoots a table in the corner of a room.
- as Figure 4A shows, there is a radio cassette recorder on the table.
- Figure 10 illustrates the processing of a luminance pixel in the motion register in connection with the example of Figures 4A, 5A, 6A and 7A.
- Figure 10 is drawn up using the same principle as Figure 3, i.e. the horizontal axis shows numbers 1 to 30 of successive images and the vertical axis the value range of the luminance pixel and the data element in the motion register.
- the variations in the value of the luminance pixel under study are shown by curve 1000.
- the variation in the value of the data element corresponding to said luminance pixel in the motion register is shown by curve 1002.
- the luminance pixel selected for study is part of the radio cassette recorder.
- the slight variation in the pixel value is noise.
- the luminance pixel is part of the person's hand.
- the luminance pixel is part of the table.
- in our example, an embodiment is in use wherein the values of data elements included in the motion register and deviating from zero are reduced towards zero by a predetermined amount in connection with each addition of difference data. In our example, this predetermined amount is five.
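A per-element sketch of this embodiment follows. Only the reduction amount of five comes from the example; the noise threshold and the use of signed difference data are assumptions made for illustration.

```python
DECAY = 5            # the example's predetermined reduction amount
NOISE_THRESHOLD = 8  # assumed: smaller luminance differences count as noise

def update_element(register_value, prev_luma, curr_luma):
    """Update one motion-register data element from a luminance pixel pair."""
    diff = curr_luma - prev_luma
    if abs(diff) >= NOISE_THRESHOLD:
        register_value += diff           # add the difference data
    # pull a non-zero data element DECAY steps back toward zero
    if register_value > 0:
        register_value = max(register_value - DECAY, 0)
    elif register_value < 0:
        register_value = min(register_value + DECAY, 0)
    return register_value
```

With this update, a pixel that stops changing has its data element drained back to zero in a few frames, which is what makes identified motion fade out of the register over time.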
- Figures 4B, 5B, 6B and 7B show what the motion detected in Figures 4A, 5A, 6A and 7A looks like as a black-and-white image. In our example, motion is presented in black and immobility in white. Since Figure 4A shows no motion, Figure 4B is entirely white.
- Figure 6B shows that the person is holding the radio cassette recorder.
- the removal of the radio cassette recorder is visible as a motion at the point where it was.
- in Figure 7B, after the person's motion has disappeared, the removal of the radio cassette recorder is still visible.
- the threshold value for motion identification, the threshold value for generating the black-and-white image, and the predetermined reduction amount applied with each addition of difference data can all be adjusted to control how long motion remains visible in the image.
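The generation of the black-and-white image can be sketched as a simple thresholding of the motion register; the threshold value below is an assumption, and the black/white pixel values follow the example's convention of black for motion and white for immobility.

```python
BW_THRESHOLD = 20  # assumed threshold for showing a data element as motion

def to_black_and_white(motion_register):
    """Map each data element to 0 (black, motion) or 255 (white, no motion)."""
    return [[0 if abs(element) >= BW_THRESHOLD else 255 for element in row]
            for row in motion_register]
```

Raising `BW_THRESHOLD`, or using a larger reduction amount, makes the black traces of past motion disappear sooner.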
- the black-and- white image sequence shows the person's motion as a black figure that approaches the table, grabs the radio cassette recorder, and exits the corner of the room from the image area shot by the camera.
- the reduction in the value of a data element results in identified motion finally disappearing from the black-and-white image: when the values of the data elements in the motion register have been sufficiently reduced, the black radio cassette recorder also disappears from the black-and-white image.
- Figures 4C, 5C, 6C and 7C show what an image zoomed to the motion identified in Figures 4A, 5A, 6A and 7A looks like. A comparison of Figures 4A and 4C shows that in Figure 4C no zooming is yet applied to the motion, since, in accordance with Figure 4B, no motion has yet been identified.
- Figure 5C shows how the image is zoomed to the motion identified in the image, i.e. towards the person who walked into the image. In accordance with Figure 6C, zooming has been continued.
- zooming is directed towards the removed radio cassette recorder, since, in accordance with Figure 7B, the removal remained visible as motion.
- the image area can be restored to normal, for instance by stepless zooming back to the original image area.
- the effect of moving the camera is illustrated in Figure 8.
- the camera that generated the images is guided to move in the direction of the motion identified in the image.
- the previous image is outlined by frame 800, and the present image by frame 802.
- the obliquely checked area 804 is the old image area that is omitted in the study performed to identify motion.
- the horizontally checked area 806 is an area that exists only in the new image, for which area only the data elements corresponding to the luminance pixels in the present image are zeroed.
- a motion vector 808 between the previous image 800 and the present image 802 determines the direction in which the camera was moved.
- the common area 810 of the previous image 800 and the present image 802 is an area for which the luminance pixels of the previous image 800 are transferred to the positions of the luminance pixels in the present image 802 that describe the same image area; similarly, the corresponding data elements in the motion register are transferred to the positions of that same image area.
- the data elements in the motion register and the luminance pixels of the previous image 800 in the frame buffer are moved a distance specified by the motion vector 808 in the opposite direction.
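The transfer described above can be sketched as a plain two-dimensional shift applied to both the frame buffer and the motion register; the function below is an illustrative stand-in, and the caller passes the motion vector negated, as the text specifies ("in the opposite direction").

```python
def shift_buffer(buffer, dx, dy, fill=0):
    """Move buffer contents by (dx, dy) cells. Cells that are exposed
    (the area visible only in the new image) are set to `fill`,
    i.e. zeroed data elements in the motion register's case."""
    height, width = len(buffer), len(buffer[0])
    out = [[fill] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            src_y, src_x = y - dy, x - dx
            if 0 <= src_y < height and 0 <= src_x < width:
                out[y][x] = buffer[src_y][src_x]
    return out
```

Shifting right by one column drops the rightmost old column (area 804) and zeroes the leftmost new one (area 806), matching the areas of Figure 8.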
- Figure 9 illustrates the effect of the use of an optical zoom in a camera in the above methods.
- the camera that generated the images is directed to zoom optically in the direction of the motion identified in the image.
- the previous image is outlined by frame 900, and the present image by frame 902.
- the checked area 904 is the area that is visible only in the previous image 900.
- the common part of the previous image 900 and the present image 902 is area 906.
- the image area of the previous image 900 is modified to correspond to the image area of the present image 902 by omitting area 904 from it and by interpolating therein the missing luminance pixels that correspond to the image area of the present image 902; likewise, the image area in the motion register is modified to correspond to the image area of the present image 902 by interpolating therein the missing data elements corresponding to that image area.
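The zoom compensation above can be sketched as cropping the buffer to the common area 906 and interpolating it back up to the present image size. Nearest-neighbour interpolation is used here purely for illustration; the patent does not prescribe a particular interpolation method.

```python
def zoom_to_common_area(buffer, top, left, crop_h, crop_w):
    """Omit everything outside the common area (top/left corner, crop_h
    by crop_w cells), then resize that crop back to the original buffer
    size by nearest-neighbour interpolation."""
    height, width = len(buffer), len(buffer[0])
    crop = [row[left:left + crop_w] for row in buffer[top:top + crop_h]]
    return [[crop[y * crop_h // height][x * crop_w // width]
             for x in range(width)]
            for y in range(height)]
```

Applying the same crop-and-interpolate step to the previous image's frame buffer and to the motion register keeps both aligned with the zoomed present image.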
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2002350782A AU2002350782A1 (en) | 2001-12-18 | 2002-12-13 | Method and device for identifying motion |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FI20012497 | 2001-12-18 | ||
FI20012497A FI112018B (fi) | 2001-12-18 | 2001-12-18 | Menetelmä ja laite liikkeen tunnistamiseksi |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2003052711A1 true WO2003052711A1 (en) | 2003-06-26 |
Family
ID=8562510
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/FI2002/001022 WO2003052711A1 (en) | 2001-12-18 | 2002-12-13 | Method and device for identifying motion |
Country Status (3)
Country | Link |
---|---|
AU (1) | AU2002350782A1 (fi) |
FI (1) | FI112018B (fi) |
WO (1) | WO2003052711A1 (fi) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0878965A2 (en) * | 1997-05-14 | 1998-11-18 | Hitachi Denshi Kabushiki Kaisha | Method for tracking entering object and apparatus for tracking and monitoring entering object |
JP2000059796A (ja) * | 1998-06-03 | 2000-02-25 | Matsushita Electric Ind Co Ltd | 動き検出装置及び動き検出方法及び動き検出のプログラムを記録した記録媒体 |
WO2001048719A1 (en) * | 1999-12-23 | 2001-07-05 | Wespot Ab | Surveillance method, system and module |
WO2001048696A1 (en) * | 1999-12-23 | 2001-07-05 | Wespot Ab | Method, device and computer program for monitoring an area |
JP2001298728A (ja) * | 2000-04-12 | 2001-10-26 | Meidensha Corp | 遠方監視システム及び画像符号化処理方法 |
WO2002051154A1 (fr) * | 2000-12-19 | 2002-06-27 | Ooo 'mp 'elsys' | Procede et dispositif de transformation d'images |
- 2001-12-18 FI FI20012497A patent/FI112018B/fi not_active IP Right Cessation
- 2002-12-13 AU AU2002350782A patent/AU2002350782A1/en not_active Abandoned
- 2002-12-13 WO PCT/FI2002/001022 patent/WO2003052711A1/en not_active Application Discontinuation
Non-Patent Citations (2)
Title |
---|
DATABASE WPI Derwent World Patents Index; Class G06, AN 2000-243113 * |
DATABASE WPI Derwent World Patents Index; Class H04, AN 2002-052032 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2005027068A1 (en) * | 2003-09-12 | 2005-03-24 | Canon Kabushiki Kaisha | Streaming non-continuous video data |
US7683940B2 (en) | 2003-09-12 | 2010-03-23 | Canon Kabushiki Kaisha | Streaming non-continuous video data |
US8599277B2 (en) | 2003-09-12 | 2013-12-03 | Canon Kabushiki Kaisha | Streaming non-continuous video data |
GB2423661A (en) * | 2005-02-28 | 2006-08-30 | David Thomas | Identifying scene changes |
Also Published As
Publication number | Publication date |
---|---|
FI112018B (fi) | 2003-10-15 |
AU2002350782A1 (en) | 2003-06-30 |
FI20012497A0 (fi) | 2001-12-18 |
FI20012497A (fi) | 2003-06-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7124427B1 (en) | Method and apparatus for surveillance using an image server | |
AU701222B2 (en) | Video surveillance system | |
US10127452B2 (en) | Relevant image detection in a camera, recorder, or video streaming device | |
CA2226324C (en) | Video surveillance system using camera control parameters to optimize motion vector processing | |
US8026945B2 (en) | Directed attention digital video recordation | |
JP2004023373A (ja) | 画像処理装置及びその方法、並びにコンピュータプログラム及びコンピュータ可読記憶媒体 | |
CN100446568C (zh) | 视频监控设备及方法 | |
KR100834465B1 (ko) | 움직임감지를 이용한 방범시스템 및 방법 | |
KR20090063120A (ko) | 결합 이미지를 생성하는 방법 및 장치 | |
JPH11509701A (ja) | ビデオ圧縮装置 | |
CN100539687C (zh) | 具备保护固定目标功能的智能网络摄像机 | |
US20110129012A1 (en) | Video Data Compression | |
JP2005175970A (ja) | 撮像システム | |
KR20040084517A (ko) | 동영상 휴대 전화기를 이용한 보안 감시 시스템 | |
WO2003052711A1 (en) | Method and device for identifying motion | |
JP2009100259A (ja) | 監視カメラおよび画像監視システム | |
KR100420620B1 (ko) | 객체기반 영상 감시시스템 | |
JP2003284062A (ja) | 監視システム | |
Tsifouti et al. | A methodology to evaluate the effect of video compression on the performance of analytics systems | |
Vítek et al. | Video compression technique impact on efficiency of person identification in CCTV systems | |
FI112017B (fi) | Menetelmä ja laite automaattiseen zoomaukseen | |
KR20020071567A (ko) | 움직이는 물체를 효율적으로 추적하는 감시용 카메라시스템 및 그 제어 방법 | |
CA2242322C (en) | Digital video security system | |
KR100290607B1 (ko) | 자동물체인식및추적촬영기능의보안방법 | |
JPH07184206A (ja) | 画像符号化装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SC SD SE SG SK SL TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LU MC NL PT SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
122 | Ep: pct application non-entry in european phase | ||
NENP | Non-entry into the national phase |
Ref country code: JP |
|
WWW | Wipo information: withdrawn in national office |
Country of ref document: JP |