WO2007036873A2 - Motion detection device - Google Patents

Motion detection device

Info

Publication number
WO2007036873A2
WO2007036873A2 PCT/IB2006/053488
Authority
WO
WIPO (PCT)
Prior art keywords
video frames
processing
velocity
sequence
video
Prior art date
Application number
PCT/IB2006/053488
Other languages
French (fr)
Other versions
WO2007036873A3 (en)
Inventor
Harold G. P. H. Benten
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V. filed Critical Koninklijke Philips Electronics N.V.
Priority to US12/067,962 priority Critical patent/US8135177B2/en
Priority to EP06821148A priority patent/EP1932352A2/en
Priority to JP2008531874A priority patent/JP2009510827A/en
Publication of WO2007036873A2 publication Critical patent/WO2007036873A2/en
Publication of WO2007036873A3 publication Critical patent/WO2007036873A3/en

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/052 Detecting movement of traffic to be counted or controlled with provision for determining speed or overspeed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/223 Analysis of motion using block-matching
    • G06T7/238 Analysis of motion using block-matching using non-full search, e.g. three-step search
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/017 Detecting movement of traffic to be counted or controlled identifying vehicles
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the processing unit is adapted to determine simultaneously the velocity of a multitude of objects or persons.
  • the 3DRS algorithm processes whole frames such that all objects or persons captured by the frames and moving within these frames are processed. This makes it possible to use the invention for traffic surveillance applications, whereby the velocity of a multitude of vehicles should be checked, and which should preferably be checked simultaneously to efficiently control whether speed limits are obeyed.
  • Using the invention it is possible to differentiate whether the vehicles approach the camera or move away from it.
  • the processing system is located in the housing of a video camera.
  • the system becomes an embedded system which is easy to carry and easy to use.
  • the hardware requirements for that purpose strongly depend on the application, and on the desired accuracy of the device.
  • the device may comprise a mainboard having a size of 180 x 125 mm with a Philips PNX1300 chip comprising a Philips TM1300 processor, and having 1 MB RAM.
  • This extra card can be integrated into the video camera to monitor traffic on motorways.
  • hardware requirements are lower for devices designed to check whether a person is intruding into a building or some premises; for that purpose a low-resolution camera is sufficient.
  • the processing system is implemented as a real-time system. Achieving a real-time implementation depends on the capabilities of the hardware. Even existing hardware, such as a Philips TM1300 processor, can guarantee that the 3DRS algorithm works in real-time, such that there is no need to store large amounts of data for offline processing. The underlying reason is that the 3DRS algorithm is extremely efficient and robust, requiring only 7 to 10 operations per pixel depending on the actual implementation and requirements.
  • the processing system is adapted to indicate the position of a moving object or a moving person.
  • This capability is provided by post processing the multitude of motion vectors obtained by the 3DRS- algorithm.
  • for a moving object, e.g. a moving car on a road, the position of the object can be defined to be the center of the region of blocks with non-vanishing motion vectors.
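A minimal sketch of this position estimate: the centre of all blocks whose motion vector is non-vanishing. The per-block motion field layout, the block size and the threshold are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

BLOCK = 8  # pixels per block side, as used by the block-matching step

def object_position(field, min_length=1.0):
    """Centre (x, y), in pixels, of all blocks whose motion vector length is
    at least `min_length`; None if nothing is moving."""
    lengths = np.hypot(field[..., 0], field[..., 1])
    ys, xs = np.nonzero(lengths >= min_length)   # block indices of moving blocks
    if xs.size == 0:
        return None                              # no non-vanishing vectors
    cx = float((xs.mean() + 0.5) * BLOCK)        # block index -> pixel centre
    cy = float((ys.mean() + 0.5) * BLOCK)
    return cx, cy
```

For a car covering several blocks, the centroid of the moving blocks is a simple and stable position indicator, which is all the postprocessing step described above requires.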
  • the processing system is adapted to carry out an object recognition. Doing this means comparing the size and shape of objects in the frames by algorithms which are known in the prior art, e.g. in order to differentiate persons from vehicles, and to differentiate among vehicles, e.g. to differentiate between cars and lorries.
  • the processing system is adapted to carry out a number plate recognition.
  • the number plate recognition can be done with algorithms based on optical character recognition which are well known to the person skilled in the art. Number plate recognition is a useful capability of the device when the device shall be used for speed detection or for identifying vehicles which have passed red traffic lights.
  • a second aspect of the invention refers to a method for extracting motion information from a sequence of video frames.
  • a sequence of video frames is grabbed.
  • the digital video frames grabbed by the video camera are processed, whereby processing is done by using a recursive search block algorithm to determine whether the video frames show an object or person which is moving.
  • the algorithm works in the way described by Gerard de Haan et al., "True motion estimation with 3D recursive search block matching", IEEE Transactions on Circuits and Systems for Video Technology, volume 3, number 5, October 1993, to which this application explicitly refers and which is incorporated by reference.
  • the method according to the invention has the advantage that it can be universally applied to video sequences which are not encoded. Thus the method does not encode video sequences prior to processing them. On the contrary, if an encoded video sequence shall be processed it is necessary to decode it first, as the method uses the 3DRS algorithm, which processes the pixels of the frames.
  • the motion vectors calculated by the 3DRS algorithm represent the true motion of an object or person, such that there is no need to postprocess acquired motion vectors to improve their quality to an acceptable level.
  • the 3DRS algorithm is extremely efficient, even in comparison to other known block matching algorithms, such that the method is particularly fast, which makes it possible to process grabbed video sequences in real-time.
  • the method can be used for surveillance applications such as traffic surveillance.
  • Another area where the method can be used is for road rule enforcement cameras, in particular as a speed camera or red light camera.
  • Fig. 1 shows a digital video camera for extracting motion information
  • Fig. 2 illustrates the selection of locations for speed checking
  • Fig. 3 illustrates the calibration of the device for speed checking locations
  • Fig. 4 is a measurement indicating areas with non-vanishing motion vectors
  • Fig. 5 depicts a flowchart for carrying out the invention
  • Table 1 contains measurement values and conversion factors of the calibration.
  • Fig. 1 shows a device according to the invention. It comprises a digital video camera 1 having a housing 2 including a processing unit 3. Furthermore, the digital video camera has an output port 4 for communicating with an external computer (not shown), e.g. via an Ethernet cable 5. The external computer might be located in a police station. In addition, the digital video camera 1 has an optional transceiver 6 for wireless transmissions of acquired data to the remote computer.
  • the digital video camera 1 was a Panasonic NV-DX110EG consumer video camera which is commercially available and which does not need further explanation. This video camera 1 grabbed video frames at a frame rate of 25 Hz and outputted them via a 4-pin i.Link input/output port.
  • the outputted video sequence was transferred to a conventional notebook (not shown) and was stored in the AVI-format at 25 Hz.
  • this compressed video format needed to be decoded first before the frames could be processed.
  • the decoded video sequence had a resolution of 720 x 576 pixels and a frame rate of 25 Hz.
  • a computer program based on a basic 3DRS-algorithm was used for processing the unencoded video sequence, without any preprocessing or postprocessing. This algorithm was executed on the notebook mentioned above. It yielded true motion vectors giving rise to velocity values which could be trusted. Furthermore, the true motion vectors allowed the 3DRS algorithm to work robustly and efficiently, and thus very fast, such that the device processed the frames in real-time without preprocessing or postprocessing.
  • the first step consisted in installing the digital video camera and fixing it to the bridge over a motorway.
  • in a second step it was tested whether the digital video camera generated a video sequence and thus functioned properly.
  • the notebook mentioned above was used to calibrate the device by means of an application software.
  • the device according to the invention comprised, in the framework of the feasibility study, the digital video camera and the notebook.
  • the notebook represented the processing unit comprising a processor and associated memory in the sense of alternative a) mentioned above.
  • a first calibration step consisted in selecting locations of the motorway where speed checking should be performed. This is illustrated with the help of fig. 2.
  • Fig. 2 shows a motorway with three lanes with vehicles approaching the video camera. For each lane a measurement location 8, 9 and 10 is selected.
  • since the 3DRS-algorithm will start to estimate the speed of every object as soon as it enters the frame/image, selecting the proper positions requires some care. Good positions are not too close to the borders of the image and not too far into the background.
  • the notebook served as a processing unit and used a basic 3DRS-algorithm without any preprocessing or postprocessing.
  • the algorithm processed frames and subdivided the frames into blocks of pixels, namely 8 x 8 pixels per block.
  • in step 1 a sequence of video frames is grabbed. These video frames are processed in step 6, and the results are outputted in step 5.
  • in step 2 the frames are analyzed by means of a 3DRS algorithm to identify a moving object within any of the measurement locations 8, 9 or 10 shown in fig. 2.
  • a moving object, e.g. a car, exists in these areas if there are pixel blocks with non-vanishing motion vectors in this region.
  • in step 3 the velocity associated with these moving pixel blocks is determined and it is decided whether this velocity exceeds an allowed value, e.g. 100 km/h for a motorway.
  • in step 4 the number plate of the vehicle is extracted from the video frames. This is done by an additional computer program module as known in the prior art.
  • in step 5 the data are outputted to an external computer which might be located in a police station.
  • the data comprise the number plate, the speed and possibly a frame/image of the vehicle driving too fast.
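The steps above can be sketched as one processing loop. Every helper passed in (motion estimator, speed measurement, plate reader) is a hypothetical placeholder for the modules described in the text; only the control flow of steps 2 to 5 is taken from the flowchart.

```python
SPEED_LIMIT_KMH = 100.0  # allowed value from the text, e.g. for a motorway

def process_frame_pair(prev, curr, locations, estimate_motion, speed_kmh,
                       read_plate):
    """Steps 2-5 of the flowchart for one pair of grabbed frames."""
    field = estimate_motion(prev, curr)        # step 2: 3DRS motion vectors
    reports = []
    for loc in locations:                      # measurement locations 8, 9, 10
        v = speed_kmh(field, loc)              # step 3: velocity at the location
        if v is not None and v > SPEED_LIMIT_KMH:
            plate = read_plate(curr, loc)      # step 4: number plate extraction
            reports.append({"location": loc, "speed_kmh": v, "plate": plate})
    return reports                             # step 5: data for the output
```

Injecting the helpers keeps the loop independent of the concrete motion estimator and OCR module, mirroring how the description treats them as exchangeable components.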

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Traffic Control Systems (AREA)
  • Studio Devices (AREA)

Abstract

The invention refers to a device, a method and a computer program product for extracting motion information from a sequence of video frames. Existing solutions for extracting motion information from a sequence of video frames need a massive computing power, which makes it difficult and expensive to implement a real-time system. It is therefore an object of the invention to simplify such a device and to provide a real-time embedded system. It is suggested to provide a device comprising a digital video camera 1. The video camera 1 includes a processing unit 3 for processing video frames grabbed by the video camera 1. The processing uses a 3D recursive search block matching algorithm to extract the motion information from the video frames. The device can be used for traffic surveillance applications, e.g. for determining the speed of vehicles on the streets and roads.

Description

Motion detection device
The invention refers to the field of video processing and provides a device, a corresponding method and a computer program product for extracting motion information from a sequence of video frames. The invention can be used in surveillance applications, e.g. traffic surveillance applications, and for the detection of an intrusion into buildings or premises.
Motion information can be of great importance in a number of applications, including traffic monitoring, tracking people, security and surveillance. For example, with the increasing number of vehicles on the road, many cities now face significant problems with traffic congestion. Major cities in the world now use traffic guiding systems to remedy these situations and to use existing infrastructure more efficiently. For doing that, systems are necessary which monitor a multitude of vehicles simultaneously, in real-time and at low cost. Digital video processing has evolved tremendously over the last couple of years.
Numerous publications have tackled the problem of detecting the movements of objects such as cars or of persons. Even for a relatively simple task such as speed estimation of vehicles, existing solutions use a combination of memory intensive algorithms and/or algorithms which need a massive computing power. Algorithms known for that purpose make use of object recognition or object tracking, or compare images taken at different moments in time. It is therefore difficult and expensive to implement a real-time system for such applications.
True motion estimation is a video processing technique applied in high-end TV sets. These TV sets use a frame rate of 100 Hz instead of the standard 50 Hz. This makes it necessary to create new video frames by means of interpolation. For doing that with a high frame quality the motion of pixel blocks within the two-dimensional frames is estimated. This can be done by a 3D recursive search block matching algorithm as described in the document of Gerard de Haan et al., "True motion estimation with 3D-recursive search block matching", IEEE Transactions on Circuits and Systems for Video Technology, volume 3, number 5, October 1993. This algorithm subdivides a frame into blocks of 8 x 8 pixels and tries to identify the position of each block in the next frame. The comparison of these locations makes it possible to assign a motion vector to each pixel block which comprises the ratio of the displacement of the block in pixels and the time between two frames. US 6,757,328 B1 discloses a method for extracting motion information from a video sequence. The video sequence used by this US patent already contains motion vectors inherent to the video stream, e.g. an MPEG stream. The motion vectors are extracted from the encoded video stream. These motion vectors in the MPEG stream have been created by the encoding process, such that they do not represent a true motion. As an example, the MPEG stream may contain motion vectors pointing to the left although the object carries out a movement to the right. In order to solve this problem a filtering step is carried out to remedy the poor quality of the motion vectors. After the filtering step the authors of this US patent use the motion information for traffic surveillance applications.
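As background, the block-matching idea can be sketched in a few lines. The following is a deliberately simplified recursive-search matcher, not the full 3DRS algorithm of de Haan et al.: for each 8 x 8 block it evaluates a spatial predictor, a temporal predictor, the zero vector and one random update using the sum of absolute differences (SAD). The candidate set and all names are illustrative assumptions.

```python
import numpy as np

BLOCK = 8  # block size in pixels, as in the 3DRS paper

def sad(prev, curr, bx, by, vx, vy):
    """SAD between block (bx, by) in `curr` and its displaced origin in `prev`
    for candidate vector (vx, vy)."""
    h, w = prev.shape
    x, y = bx * BLOCK, by * BLOCK
    px, py = x - vx, y - vy                 # where the block would have come from
    if px < 0 or py < 0 or px + BLOCK > w or py + BLOCK > h:
        return np.inf                       # candidate points outside the frame
    a = curr[y:y + BLOCK, x:x + BLOCK].astype(np.int32)
    b = prev[py:py + BLOCK, px:px + BLOCK].astype(np.int32)
    return int(np.abs(a - b).sum())

def estimate(prev, curr, prev_field, rng):
    """One update of the block motion field; `prev_field` holds the vectors
    of the previous frame pair (rows x cols x 2)."""
    rows, cols = prev_field.shape[:2]
    field = prev_field.copy()
    for by in range(rows):
        for bx in range(cols):
            spatial = tuple(field[by, bx - 1]) if bx > 0 else (0, 0)
            temporal = tuple(prev_field[by, bx])
            update = (spatial[0] + int(rng.integers(-1, 2)),
                      spatial[1] + int(rng.integers(-1, 2)))
            cands = [spatial, temporal, (0, 0), update]
            field[by, bx] = min(cands, key=lambda v: sad(prev, curr, bx, by, *v))
    return field
```

Reusing spatial and temporal predictors instead of an exhaustive search is what keeps the operation count per pixel so low; the real 3DRS algorithm refines this with a carefully chosen candidate and penalty scheme.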
It is an object of the invention to provide a device, a method and a computer program product for extracting motion information from a sequence of video frames which can be used for video frames which are not encoded.
Another object of the invention is to carry out an extraction of motion information which is simple and highly efficient such that a real-time processing is possible.
This object and other objects are solved by the features of the independent claims. Preferred embodiments of the invention are described by the features of the dependent claims. It should be emphasized that any reference signs in the claims shall not be construed as limiting the scope of the invention. According to a first aspect of the invention the above mentioned object is solved by a device for extracting motion information from a sequence of video frames which comprises a digital video camera for grabbing the video frames. Furthermore, the device comprises a processing unit for processing the video frames provided by the video camera, whereby the processing unit is adapted to use a 3D recursive search block algorithm to determine whether the video frames show an object or a person which is moving.
According to the invention extraction of motion information is done on a video sequence which is not encoded. That means that if a video sequence is already encoded, e.g. because it is an MPEG video stream, it needs to be decoded first. The reason is that the algorithm for extracting motion information, which will be discussed in detail below, operates on the pixels of the video frames.
When operating the device the digital video camera grabs a sequence of video frames, and the processing unit processes the digital video frames from the digital video camera in order to extract a motion information. This processing is done by using a recursive search block algorithm to determine whether the video frames show an object or person which is moving.
It goes without saying that the method can be carried out by using a computer program product using the underlying algorithm. The computer program product comprises a computer readable medium having thereon computer program code means which, when said program is loaded, make the computer execute the steps for determining whether the video frames show an object or a person which is moving, or generally for carrying out the method which will be explained below in more detail.
A sequence of video frames is provided by a digital video camera of arbitrary type, e.g. a CMOS, CCD, or infrared video camera, which is fixed or which is moving. The digital video camera itself is not part of the present invention, such that it does not need further explanation.
The processing unit may be a) a processor and a corresponding computer program (as an example, the processor might be a Trimedia processor or an Xetal processor of Philips, e.g. a Philips PNX1300 chip comprising a TM1300 processor), b) a dedicated chip, for example an ASIC or an FPGA, c) an integral part of an existing chip of the video camera hardware, or d) a combination of the possibilities mentioned above. The preferred choice depends on system aspects and on product requirements.
A preferred embodiment of the processing unit uses an extra card to be inserted in a digital video camera, the card having a size of 180 mm x 125 mm and comprising a Philips PNX1300 chip, which itself comprises a Philips TM1300 processor. Furthermore, the card uses 1 MB of RAM for two frame memories and one vector memory. The processing unit uses a 3D recursive search block (3DRS) algorithm to extract motion information from the video frames. The algorithm works in the way described by Gerard de Haan et al., "True motion estimation with 3D recursive search block matching", IEEE Transactions on Circuits and Systems for Video Technology, volume 3, number 5, October 1993, to which this application explicitly refers and which is incorporated by reference.
The device according to the invention has the advantage that it can be universally applied to video sequences which are not encoded. There is thus no need to encode the video sequences prior to processing them, and it is not necessary to make a financial investment into corresponding software or hardware.
Another advantage of the device is that the motion vectors calculated by the 3DRS algorithm represent the true motion of an object or person, such that there is no need to postprocess acquired motion vectors in order to improve their quality to an acceptable level. This is however important for the application of the device: if the device is used for speed measurements the reliability and the accuracy of the speed values is high when the motion vectors represent true motion, and is lower when a postprocessing of the motion vectors is necessary.
Still another advantage of the device is that the 3DRS algorithm is extremely efficient, even in comparison to other known block matching algorithms, such that the design of a device which is operating in real-time becomes straightforward. In doing that there is a high degree of freedom as far as the choice of the processing unit is concerned, such that the execution of the 3DRS algorithm can be implemented in hardware as well as in software. According to a preferred embodiment of the invention the processing unit is adapted to determine the velocity of the object or person captured by the video frames of the video sequence. This can be done as follows. The 3DRS algorithm processes the complete frame in blocks of pixels, e.g. 8 x 8 pixels per block. The 3DRS-algorithm outputs one motion vector for each 8 x 8 block of pixels. Each vector has an x- and a y- component, whereby x and y represent a two-dimensional Cartesian coordinate system with a horizontal x-axis pointing to the right, and a vertical y-axis pointing to the top, cf. fig. 3. The absolute value of the motion vector represents the velocity measured in pixels or in fractions of pixels, e.g. in quarter pixels.
As an example it is assumed that the x-value of the motion vector is 12, and that the y-value of the motion vector is -37 for a certain position, e.g. for a block of 8 x 8 pixels in the frame. Furthermore, a quarter pixel accuracy is assumed. This means that this particular block is moving with a speed of 12 x 0.25 = 3 pixels to the right because the x-value is positive, and 37 x 0.25 = 9.25 pixels downwards because the y-value is negative. The conversion of the motion vectors into actual speeds or velocities is as follows. In the first step the x- and y-components are used to calculate the length of the motion vector, denoted by vec_length, in the direction of the motion, which is given by
vec_length = sqrt(vx^2 + vy^2) (equation 1) (in units of pixels), whereby vx is the x-component and vy is the y-component of this velocity.
Since the frame frequency, e.g. 25 Hz, is known from the digital video camera, the velocity of the object in pixels per second (pps) is calculated by means of speed_pps = vec_length * frame_freq (equation 2), whereby frame_freq denotes the frame frequency.
The velocity in pixels per second, denoted by speed_pps, is converted into the actual speed in meters per second (mps), denoted by speed_mps, by dividing it by a conversion factor according to speed_mps = speed_pps / conv_factor (equation 3), whereby conv_factor denotes said conversion factor responsible for converting a distance in pixels into a distance in meters. Lastly, the velocity in m/s is expressed in km/h or miles/h for easier interpretation.
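The chain of equations 1 to 3, together with the quarter-pixel scaling from the example above, can be sketched in code as follows. This is an illustrative sketch only: the function name block_speed_kmh and the pel_accuracy parameter are not taken from the application.

```python
import math

def block_speed_kmh(vx, vy, frame_freq, conv_factor, pel_accuracy=0.25):
    """Convert one motion vector (in quarter-pel units by default) into km/h.

    vx, vy        -- motion vector components as output per 8 x 8 block
    frame_freq    -- camera frame rate in Hz, e.g. 25
    conv_factor   -- pixels-per-meter factor from calibration (equation 3)
    pel_accuracy  -- fraction of a pixel per vector unit (0.25 for quarter pel)
    """
    # equation 1: vector length in pixels per frame
    vec_length = math.hypot(vx * pel_accuracy, vy * pel_accuracy)
    # equation 2: velocity in pixels per second
    speed_pps = vec_length * frame_freq
    # equation 3: velocity in meters per second
    speed_mps = speed_pps / conv_factor
    # express in km/h for easier interpretation
    return speed_mps * 3.6

# the quarter-pel example from the text, vx = 12 and vy = -37, corresponds
# to a displacement of 3 px to the right and 9.25 px downwards per frame
```

With the example vector (12, -37), a 25 Hz frame rate and a conversion factor of, say, 7.5 pixels per meter, the function returns the speed in km/h in one call.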
The conversion factor is determined only once when the device is calibrated. Its value depends on the location of the object, e.g. a vehicle, in the frame. Each location has its own conversion factor, whereby its value can be extracted from information present in the frame itself. This can be done when a known distance, e.g. in meters, is measured in pixels. An example would be to measure the distance, e.g. in meters, between adjacent lane marks in the middle of the road and comparing it with the corresponding distance in pixels. Other objects which can be used for that purpose are the distance between two objects next to the road, the vehicles themselves etc.
The velocity determined in this way is the average velocity between two frames. The expression velocity is used synonymously with the expression speed within this description. As the time difference between two frames is very small, the measured velocity is to a good approximation the current velocity at a given time. It is however also possible to calculate the velocity for a multitude of pairs of subsequent frames in order to carry out a velocity tracking from frame to frame. This in turn opens the way to calculate the average of these velocity values.
According to a preferred embodiment of the invention, the processing unit is adapted to determine simultaneously the velocity of a multitude of objects or persons. The 3DRS algorithm processes whole frames, such that all objects or persons captured by the frames and moving within them are processed. This makes it possible to use the invention for traffic surveillance applications, whereby the velocity of a multitude of vehicles should be checked, preferably simultaneously, to efficiently control whether speed limits are obeyed. Using the invention it is possible to differentiate whether the vehicles approach the camera or move away from it. Furthermore, it is possible to monitor the velocity of vehicles on a multitude of lanes, and even to determine the average velocity of the cars on each lane. Determining the average velocity of said multitude of vehicles provides an indicator of whether there is a traffic congestion on the road.
According to a preferred embodiment of the invention, the processing system is located in the housing of a video camera. In this way the system becomes an embedded system which is easy to carry and easy to use. The hardware requirements for that purpose strongly depend on the application and on the desired accuracy of the device. As an example, the device may comprise a mainboard having a size of 180 x 125 mm, carrying a Philips PNX1300 chip comprising a Philips TM1300 processor, and having 1 MB RAM. This extra card can be integrated into the video camera to monitor traffic on motorways. However, hardware requirements are lower for devices designed to check whether a person is intruding into a building or onto some premises; in the latter example a low resolution camera is sufficient.
In a further preferred embodiment the processing system is implemented as a real-time system. Achieving a real-time implementation depends on the capabilities of the hardware. Even existing hardware, such as a Philips TM1300 processor, can guarantee that the 3DRS algorithm works in real-time such that there is no need to store large amounts of data for offline processing. The underlying reason is that the 3DRS algorithm is extremely efficient and robust, requiring only 7 to 10 operations per pixel depending on the actual implementations and requirements.
In a further preferred embodiment of the invention the processing system is adapted to indicate the position of a moving object or a moving person. This capability is provided by postprocessing the multitude of motion vectors obtained by the 3DRS algorithm. In the easiest case a moving object, e.g. a moving car on a road, defines a region with non-vanishing motion vectors, whereby a surrounding region has vanishing motion vectors. In this way the position of the object can be defined to be the center of said region with non-vanishing motion vectors.
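The position indication described above can be sketched as a small postprocessing step over the per-block motion vector field. The function name object_position and the threshold parameter are hypothetical names introduced for illustration, assuming the vectors are available as a two-dimensional array indexed by block row and column.

```python
def object_position(vectors, threshold=0.0):
    """Locate a moving object as the center of the region of blocks whose
    motion vectors do not vanish.

    vectors   -- 2-D list of (vx, vy) tuples, one per 8 x 8 block,
                 indexed as vectors[row][col]
    threshold -- minimum vector magnitude regarded as 'non-vanishing'
    Returns the (row, col) center of the moving region in block units,
    or None when no block is moving.
    """
    moving = [(r, c)
              for r, row in enumerate(vectors)
              for c, (vx, vy) in enumerate(row)
              if vx * vx + vy * vy > threshold * threshold]
    if not moving:
        return None
    # the position of the object is defined as the center of the
    # region with non-vanishing motion vectors
    return (sum(r for r, _ in moving) / len(moving),
            sum(c for _, c in moving) / len(moving))
```

A real implementation would additionally separate several disconnected regions so that each moving object receives its own position, which is omitted here for brevity.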
In another preferred embodiment the processing system is adapted to carry out an object recognition. Doing this means comparing the size and shape of objects in the frames by algorithms which are known in the prior art, e.g. in order to differentiate persons from vehicles, and to differentiate among vehicles, e.g. to differentiate between cars and lorries.
In another embodiment of the invention the processing system is adapted to carry out a number plate recognition. The number plate recognition can be done with well known algorithms based on optical character recognition which is well known to the man skilled in the art. Number plate recognition is a useful capability of the device when the device shall be used for speed detection or for identifying vehicles which have passed red traffic lights.
A second aspect of the invention refers to a method for extracting motion information from a sequence of video frames. In the first step of this method a sequence of video frames is grabbed. In a second step the digital video frames grabbed by the video camera are processed, whereby processing is done by using a recursive search block algorithm to determine whether the video frames show an object or person which is moving. Again, the algorithm works in the way described by Gerard de Haan et al., "True-motion estimation with 3-D recursive search block matching", IEEE Transactions on Circuits and Systems for Video Technology, volume 3, number 5, October 1993, to which this application explicitly refers and which is incorporated by reference.
The method according to the invention has the advantage that it can be universally applied to video sequences which are not encoded. Thus the method does not require encoding video sequences prior to processing them. On the contrary, if an encoded video sequence shall be processed it is necessary to decode it first, as the method uses the 3DRS algorithm, which processes the pixels of the frames.
Another advantage of the method is that the motion vectors calculated by the 3DRS algorithm represent the true motion of an object or person, such that there is no need to postprocess the acquired motion vectors to improve their quality to an acceptable level. Still another advantage of the method is that the 3DRS algorithm is extremely efficient, even in comparison to other known block matching algorithms, such that the method is particularly fast, which makes it possible to process grabbed video sequences in real-time. With the method mentioned above the velocity of the object or person can be determined, and even the velocities of a multitude of objects and persons can be determined simultaneously. The method can be used for surveillance applications such as traffic surveillance. Another area where the method can be used is road rule enforcement cameras, in particular speed cameras or red light cameras. These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described hereafter. It should be noted that the use of reference signs shall not be construed as limiting the scope of the invention.
In the following, preferred embodiments of the invention will be described in greater detail, by way of example only, making reference to the drawings, in which:
Fig. 1 shows a digital video camera for extracting motion information,
Fig. 2 illustrates the selection of locations for speed checking,
Fig. 3 illustrates the calibration of the device for speed checking locations,
Fig. 4 is a measurement indicating areas with non-vanishing motion vectors,
Fig. 5 depicts a flowchart for carrying out the invention, and
Table 1 contains measurement values and conversion factors of the calibration.
Fig. 1 shows a device according to the invention. It comprises a digital video camera 1 having a housing 2 including a processing unit 3. Furthermore, the digital video camera has an output port 4 for communicating with an external computer (not shown), e.g. via an Ethernet cable 5. The external computer might be located in a police station. In addition, the digital video camera 1 has an optional transceiver 6 for wireless transmissions of acquired data to the remote computer.
In a feasibility study a system was used which deviated from the digital video camera 1 as shown in fig. 1. The digital video camera 1 was a Panasonic NV-DX110EG consumer video camera which is commercially available and which does not need further explanation. This video camera 1 grabbed video frames at a frame rate of 25 Hz and outputted them via a 4-pin i.Link input/output port.
The outputted video sequence was transferred to a conventional notebook (not shown) and was stored in the AVI-format at 25 Hz. For using the 3DRS-algorithm this compressed video format needed to be decoded first, such that it was converted to the YUV422 standard. The decoded video sequence had a resolution of 720 x 576 pixels and a frame rate of 25 Hz. A computer program based on a basic 3DRS-algorithm was used for processing the unencoded video sequence, without any preprocessing or postprocessing. This algorithm was executed on the notebook mentioned above. It yielded true motion vectors giving rise to velocity values which could be trusted. Furthermore, the true motion vectors gave rise to a robust 3DRS algorithm working efficiently and thus very fast, such that the device processed the frames in real-time without preprocessing or postprocessing.
In operation, the first step consisted in installing the digital video camera and fixing it to a bridge over a motorway. In a second step it was tested whether the digital video camera generated a video sequence and thus functioned properly. In a third step the notebook mentioned above was used to calibrate the device by means of application software. In other words, the device according to the invention comprised, in the framework of the feasibility study, the digital video camera and the notebook. The notebook represented the processing unit comprising a processor and associated memory in the sense of alternative a) mentioned above.
A first calibration step consisted in selecting locations of the motorway where a speed checking should be performed. This is illustrated with the help of fig. 2. Fig. 2 shows a motorway with three lanes with vehicles approaching the video camera. For each lane a measurement location 8, 9 and 10 is selected. Although the 3DRS-algorithm will start to estimate the speed of every object as soon as it enters the frame / image, selecting the proper positions requires some care. Good positions are not too close to the borders of the image and not too far into the background.
In the next calibration step the conversion factor conv_factor for calculating the speed of the vehicles with the help of equation 3 has been determined. This was done for each of the measurement locations 8, 9 and 10. For that purpose the distance spanned by four consecutive wide stripes between the leftmost lanes, as indicated by the double arrow, has been inputted into the application software in units of meters. The same distance has been measured in units of pixels, namely 172.2 pixels. From this value the corresponding projections of this length onto the x-axis (Δx=73 pixels) and onto the y-axis (Δy=156 pixels) have been calculated. The conversion factor is used by the processing unit to convert the distances in the x- and y-direction from pixels into meters. The conversion factors are listed in column 6 of table 1.
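The calibration arithmetic can be sketched as follows, under the assumption that the reference distance in meters is supplied by the operator. The function name conversion_factor is hypothetical; the pixel numbers reproduce the measurements quoted above.

```python
import math

def conversion_factor(dx_pixels, dy_pixels, distance_meters):
    """Derive the pixels-per-meter conversion factor for one measurement
    location from a reference distance visible in the frame.

    dx_pixels, dy_pixels -- projections of the reference length onto the
                            x- and y-axis, measured in pixels
    distance_meters      -- the same reference length in meters, inputted
                            into the application software during calibration
    """
    # length of the reference distance in pixels
    length_pixels = math.hypot(dx_pixels, dy_pixels)
    return length_pixels / distance_meters

# the projections quoted in the text reproduce the measured length:
# hypot(73, 156) is approximately 172.2 pixels
```

One such factor is computed per measurement location, since the pixels-to-meters ratio changes with the location of the object in the frame.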
After calibrating the device speed measurements have been performed. The notebook served as a processing unit and used a basic 3DRS-algorithm without any preprocessing or postprocessing. The algorithm processed frames and subdivided the frames into blocks of pixels, namely 8 x 8 pixels per block.
One such measurement is illustrated with the help of fig. 4. The two cars approaching the camera are encircled in order to indicate areas 11 and 12 where pixel blocks have been identified to have non-vanishing motion vectors. The average motion vector in each of the areas 11 and 12 has been used to calculate the length of the motion vector with the help of equation 1. The frame rate was 25 Hz, such that the speed of the cars has been calculated with the help of equations 2 and 3 and the conversion factors listed in table 1. The results are shown in table 1. It is remarkable that the measurement values even with this experimental setup had a very high accuracy, which can be calculated with the help of velocity_error = (frame_freq * sqrt(2) * ε) / conv_factor (equation 4), whereby ε is the error in the motion vector, in this setup ε = 0.25 pixel.
If it is assumed that the conversion factor is 7.50 which is the worst value in table 1, and the frame rate is 25 Hz, the velocity error is only 0.33 km/h. Even for this simplified experimental setup the accuracy can be regarded to be very good.
Once the device is calibrated, its use is illustrated with the help of the flowchart of fig. 5. In step 1 a grabbing of a sequence of video frames is carried out. These video frames are processed in step 6, and the results are outputted in step 5. In the first processing step 2 the frames are analyzed by means of the 3DRS algorithm to identify a moving object within any of the measurement locations 8, 9 or 10 shown in fig. 2. A moving object, e.g. a car, exists in these areas if there are pixel blocks with non-vanishing motion vectors in this region. In step 3 the velocity associated with these moving pixel blocks is determined and it is decided whether this velocity is too high in comparison to an allowed value, e.g. 100 km/h for a motorway. If a velocity is too high, the number plate of the vehicle is extracted from the video frames in step 4. This is done by an additional computer program module as known in the prior art. In step 5 the data are outputted to an external computer which might be located in a police station. The data comprise the number plate, the speed and possibly a frame/image of the vehicle driving too fast.
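The decision logic of steps 2 and 3 of the flowchart can be sketched as follows, assuming per-block motion vectors and a conversion callable implementing equations 1 to 3. All names are illustrative and not taken from the application.

```python
def check_frame_pair(vectors, location, speed_limit_kmh, to_kmh):
    """One pass of the speed-check loop of fig. 5 (steps 2 and 3).

    vectors         -- 2-D list of per-block (vx, vy) motion vectors of
                       the current frame pair, indexed [row][col]
    location        -- iterable of (row, col) block indices covering one
                       measurement location (8, 9 or 10 in fig. 2)
    speed_limit_kmh -- allowed speed, e.g. 100 km/h for a motorway
    to_kmh          -- callable converting one (vx, vy) vector into km/h,
                       e.g. via equations 1 to 3
    Returns (speeding, speed_kmh); speed_kmh is None when no block in
    the location is moving.
    """
    # step 2: a moving object exists if blocks in the measurement
    # location have non-vanishing motion vectors
    moving = [vectors[r][c] for r, c in location if vectors[r][c] != (0, 0)]
    if not moving:
        return (False, None)
    # step 3: average the non-vanishing vectors of the region, convert
    # to km/h and compare with the allowed value
    avg = (sum(v[0] for v in moving) / len(moving),
           sum(v[1] for v in moving) / len(moving))
    speed = to_kmh(avg)
    return (speed > speed_limit_kmh, speed)
```

When the first element of the result is true, a real system would proceed to step 4 (number plate extraction) and step 5 (output to the external computer).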
LIST OF REFERENCE NUMERALS:

Claims

CLAIMS:
1. Device for extracting motion information from a sequence of video frames, comprising: a) a digital video camera (1) for grabbing a sequence of video frames, b) a processing unit (3) for processing the video frames provided by the video camera, c) the processing unit being adapted to use a recursive search block algorithm to determine whether the video frames show an object or person which is moving.
2. Device according to claim 1, characterized in that the processing unit is adapted to determine the velocity of the object or the person.
3. Device according to claim 2, characterized in that the processing unit is adapted to determine simultaneously the velocity of a multitude of objects or persons.
4. Device according to claim 1, characterized in that the processing system is located in the housing (2) of the video camera.
5. Device according to claim 1, characterized in that the processing system is implemented as a real-time system.
6. Device according to claim 1, characterized in that the processing system is adapted to indicate the position of the moving object or the moving person.
7. Device according to claim 1, characterized in that the processing system is adapted to carry out an object recognition.
8. Device according to claim 1, characterized in that the processing system is adapted to carry out a number plate recognition.
9. Method for extracting motion information from a sequence of video frames, the method comprising the following steps: a) grabbing a sequence of video frames, b) processing the digital video frames, the processing being carried out by using a recursive search block algorithm to determine whether the video frames show an object or person which is moving.
10. Method according to claim 9, characterized in that the velocity of the object or the person is determined.
11. Method according to claim 9, characterized in that the velocities of a multitude of objects or persons are determined simultaneously.
12. Method according to claim 9, characterized in that it is used for surveillance applications such as traffic surveillance or for detecting an intrusion into a building or into premises.
13. Method according to claim 9, characterized in that it is used for a road-rule enforcement camera, in particular as a speed camera or a red light camera.
14. Computer program product comprising a computer readable medium having thereon computer program code means which, when said program is loaded, make the computer execute the method according to any of the claims 9 to 13.
PCT/IB2006/053488 2005-09-27 2006-09-26 Motion detection device WO2007036873A2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US12/067,962 US8135177B2 (en) 2005-09-27 2006-09-26 Motion detection device
EP06821148A EP1932352A2 (en) 2005-09-27 2006-09-26 Motion detection device
JP2008531874A JP2009510827A (en) 2005-09-27 2006-09-26 Motion detection device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP05108908 2005-09-27
EP05108908.4 2005-09-27

Publications (2)

Publication Number Publication Date
WO2007036873A2 true WO2007036873A2 (en) 2007-04-05
WO2007036873A3 WO2007036873A3 (en) 2007-07-05

Family

ID=37807867

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2006/053488 WO2007036873A2 (en) 2005-09-27 2006-09-26 Motion detection device

Country Status (7)

Country Link
US (1) US8135177B2 (en)
EP (1) EP1932352A2 (en)
JP (1) JP2009510827A (en)
KR (1) KR20080049063A (en)
CN (1) CN101273634A (en)
TW (1) TW200721041A (en)
WO (1) WO2007036873A2 (en)

Cited By (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090052730A1 (en) * 2007-08-23 2009-02-26 Pixart Imaging Inc. Interactive image system, interactive apparatus and operating method thereof
WO2010077316A1 (en) * 2008-12-17 2010-07-08 Winkler Thomas D Multiple object speed tracking system
CN101854547A (en) * 2010-05-25 2010-10-06 无锡中星微电子有限公司 Motion frame in video collection and transmission system, method and system for detecting prospects
US7869935B2 (en) 2006-03-23 2011-01-11 Agilent Technologies, Inc. Method and system for detecting traffic information
US8118456B2 (en) 2008-05-08 2012-02-21 Express Imaging Systems, Llc Low-profile pathway illumination system
US8184863B2 (en) 2007-01-05 2012-05-22 American Traffic Solutions, Inc. Video speed detection system
US8508137B2 (en) 2009-05-20 2013-08-13 Express Imaging Systems, Llc Apparatus and method of energy efficient illumination
US8610358B2 (en) 2011-08-17 2013-12-17 Express Imaging Systems, Llc Electrostatic discharge protection for luminaire
GB2507395A (en) * 2012-08-31 2014-04-30 Xerox Corp Estimating vehicle speed from video stream motion vectors
US8872964B2 (en) 2009-05-20 2014-10-28 Express Imaging Systems, Llc Long-range motion detection for illumination control
US8878440B2 (en) 2012-08-28 2014-11-04 Express Imaging Systems, Llc Luminaire with atmospheric electrical activity detection and visual alert capabilities
US8896215B2 (en) 2012-09-05 2014-11-25 Express Imaging Systems, Llc Apparatus and method for schedule based operation of a luminaire
US8922124B2 (en) 2011-11-18 2014-12-30 Express Imaging Systems, Llc Adjustable output solid-state lamp with security features
US8926138B2 (en) 2008-05-13 2015-01-06 Express Imaging Systems, Llc Gas-discharge lamp replacement
US9131552B2 (en) 2012-07-25 2015-09-08 Express Imaging Systems, Llc Apparatus and method of operating a luminaire
US9185777B2 (en) 2014-01-30 2015-11-10 Express Imaging Systems, Llc Ambient light control in solid state lamps and luminaires
US9204523B2 (en) 2012-05-02 2015-12-01 Express Imaging Systems, Llc Remotely adjustable solid-state lamp
US9210751B2 (en) 2012-05-01 2015-12-08 Express Imaging Systems, Llc Solid state lighting, drive circuit and method of driving same
US9210759B2 (en) 2012-11-19 2015-12-08 Express Imaging Systems, Llc Luminaire with ambient sensing and autonomous control capabilities
US9241401B2 (en) 2010-06-22 2016-01-19 Express Imaging Systems, Llc Solid state lighting device and method employing heat exchanger thermally coupled circuit board
US9288873B2 (en) 2013-02-13 2016-03-15 Express Imaging Systems, Llc Systems, methods, and apparatuses for using a high current switching device as a logic level sensor
US9301365B2 (en) 2012-11-07 2016-03-29 Express Imaging Systems, Llc Luminaire with switch-mode converter power monitoring
US9360198B2 (en) 2011-12-06 2016-06-07 Express Imaging Systems, Llc Adjustable output solid-state lighting device
US9414449B2 (en) 2013-11-18 2016-08-09 Express Imaging Systems, Llc High efficiency power controller for luminaire
US9445485B2 (en) 2014-10-24 2016-09-13 Express Imaging Systems, Llc Detection and correction of faulty photo controls in outdoor luminaires
US9462662B1 (en) 2015-03-24 2016-10-04 Express Imaging Systems, Llc Low power photocontrol for luminaire
US9466443B2 (en) 2013-07-24 2016-10-11 Express Imaging Systems, Llc Photocontrol for luminaire consumes very low power
US9497393B2 (en) 2012-03-02 2016-11-15 Express Imaging Systems, Llc Systems and methods that employ object recognition
US9572230B2 (en) 2014-09-30 2017-02-14 Express Imaging Systems, Llc Centralized control of area lighting hours of illumination
US9713228B2 (en) 2011-04-12 2017-07-18 Express Imaging Systems, Llc Apparatus and method of energy efficient illumination using received signals
US9924582B2 (en) 2016-04-26 2018-03-20 Express Imaging Systems, Llc Luminaire dimming module uses 3 contact NEMA photocontrol socket
US9967933B2 (en) 2008-11-17 2018-05-08 Express Imaging Systems, Llc Electronic control to regulate power for solid-state lighting and methods thereof
US9985429B2 (en) 2016-09-21 2018-05-29 Express Imaging Systems, Llc Inrush current limiter circuit
US10098212B2 (en) 2017-02-14 2018-10-09 Express Imaging Systems, Llc Systems and methods for controlling outdoor luminaire wireless network using smart appliance
US10164374B1 (en) 2017-10-31 2018-12-25 Express Imaging Systems, Llc Receptacle sockets for twist-lock connectors
US10219360B2 (en) 2017-04-03 2019-02-26 Express Imaging Systems, Llc Systems and methods for outdoor luminaire wireless control
US10230296B2 (en) 2016-09-21 2019-03-12 Express Imaging Systems, Llc Output ripple reduction for power converters
US10568191B2 (en) 2017-04-03 2020-02-18 Express Imaging Systems, Llc Systems and methods for outdoor luminaire wireless control
US10904992B2 (en) 2017-04-03 2021-01-26 Express Imaging Systems, Llc Systems and methods for outdoor luminaire wireless control
US11212887B2 (en) 2019-11-04 2021-12-28 Express Imaging Systems, Llc Light having selectively adjustable sets of solid state light sources, circuit and method of operation thereof, to provide variable output characteristics
US11234304B2 (en) 2019-05-24 2022-01-25 Express Imaging Systems, Llc Photocontroller to control operation of a luminaire having a dimming line
US11375599B2 (en) 2017-04-03 2022-06-28 Express Imaging Systems, Llc Systems and methods for outdoor luminaire wireless control
US11765805B2 (en) 2019-06-20 2023-09-19 Express Imaging Systems, Llc Photocontroller and/or lamp with photocontrols to control operation of lamp

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8175332B2 (en) * 2008-05-22 2012-05-08 International Business Machines Corporation Upper troposphere and lower stratosphere wind direction, speed, and turbidity monitoring using digital imaging and motion tracking
TWI450590B (en) * 2009-04-16 2014-08-21 Univ Nat Taiwan Embedded system and method for loading data on motion estimation therein
US8926139B2 (en) 2009-05-01 2015-01-06 Express Imaging Systems, Llc Gas-discharge lamp replacement with passive cooling
US20120033123A1 (en) * 2010-08-06 2012-02-09 Nikon Corporation Information control apparatus, data analyzing apparatus, signal, server, information control system, signal control apparatus, and program
US10018703B2 (en) * 2012-09-13 2018-07-10 Conduent Business Services, Llc Method for stop sign law enforcement using motion vectors in video streams
TW201328359A (en) * 2011-12-19 2013-07-01 Ind Tech Res Inst Moving object detection method and apparatus based on compressed domain
CN102637361A (en) * 2012-04-01 2012-08-15 长安大学 Vehicle type distinguishing method based on video
TWI595450B (en) 2014-04-01 2017-08-11 能晶科技股份有限公司 Object detection system
KR101720408B1 (en) * 2015-03-20 2017-03-27 황영채 Searchlight for ships
US9538612B1 (en) 2015-09-03 2017-01-03 Express Imaging Systems, Llc Low power photocontrol for luminaire
US10015394B2 (en) 2015-10-06 2018-07-03 Genetec Inc. Camera-based speed estimation and system calibration therefor
EP4330916A1 (en) 2021-04-26 2024-03-06 Basf Se Computer-implemented method for determining an absolute velocity of at least one moving object

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5432547A (en) * 1991-11-22 1995-07-11 Matsushita Electric Industrial Co., Ltd. Device for monitoring disregard of a traffic signal
US5734337A (en) * 1995-11-01 1998-03-31 Kupersmit; Carl Vehicle speed monitoring system
US6647361B1 (en) * 1998-11-23 2003-11-11 Nestor, Inc. Non-violation event filtering for a traffic light violation detection system
US20040071319A1 (en) * 2002-09-19 2004-04-15 Minoru Kikuchi Object velocity measuring apparatus and object velocity measuring method
US6757328B1 (en) * 1999-05-28 2004-06-29 Kent Ridge Digital Labs. Motion information extraction system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4159606B2 (en) * 1996-05-24 2008-10-01 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Motion estimation
US6965645B2 (en) 2001-09-25 2005-11-15 Microsoft Corporation Content-based characterization of video frame sequences
US7764808B2 (en) 2003-03-24 2010-07-27 Siemens Corporation System and method for vehicle detection and tracking
US6970102B2 (en) * 2003-05-05 2005-11-29 Transol Pty Ltd Traffic violation detection, recording and evidence processing system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HAAN DE G ET AL: "TRUE-MOTION ESTIMATION WITH 3-D RECURSIVE SEARCH BLOCK MATCHING" IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 3, no. 5, 1 October 1993 (1993-10-01), pages 368-379, XP000414663 ISSN: 1051-8215 cited in the application *
MEYER M ET AL: "A new system for video-based detection of moving objects and its integration into digital networks" SECURITY TECHNOLOGY, 1996. 30TH ANNUAL 1996 INTERNATIONAL CARNAHAN CONFERENCE LEXINGTON, KY, USA 2-4 OCT. 1996, NEW YORK, NY, USA,IEEE, US, 2 October 1996 (1996-10-02), pages 105-110, XP010199874 ISBN: 0-7803-3537-6 *

US9466443B2 (en) 2013-07-24 2016-10-11 Express Imaging Systems, Llc Photocontrol for luminaire consumes very low power
US9781797B2 (en) 2013-11-18 2017-10-03 Express Imaging Systems, Llc High efficiency power controller for luminaire
US9414449B2 (en) 2013-11-18 2016-08-09 Express Imaging Systems, Llc High efficiency power controller for luminaire
US9185777B2 (en) 2014-01-30 2015-11-10 Express Imaging Systems, Llc Ambient light control in solid state lamps and luminaires
US9572230B2 (en) 2014-09-30 2017-02-14 Express Imaging Systems, Llc Centralized control of area lighting hours of illumination
US9445485B2 (en) 2014-10-24 2016-09-13 Express Imaging Systems, Llc Detection and correction of faulty photo controls in outdoor luminaires
US9462662B1 (en) 2015-03-24 2016-10-04 Express Imaging Systems, Llc Low power photocontrol for luminaire
US9924582B2 (en) 2016-04-26 2018-03-20 Express Imaging Systems, Llc Luminaire dimming module uses 3 contact NEMA photocontrol socket
US9985429B2 (en) 2016-09-21 2018-05-29 Express Imaging Systems, Llc Inrush current limiter circuit
US10230296B2 (en) 2016-09-21 2019-03-12 Express Imaging Systems, Llc Output ripple reduction for power converters
US10098212B2 (en) 2017-02-14 2018-10-09 Express Imaging Systems, Llc Systems and methods for controlling outdoor luminaire wireless network using smart appliance
US10219360B2 (en) 2017-04-03 2019-02-26 Express Imaging Systems, Llc Systems and methods for outdoor luminaire wireless control
US10390414B2 (en) 2017-04-03 2019-08-20 Express Imaging Systems, Llc Systems and methods for outdoor luminaire wireless control
US10568191B2 (en) 2017-04-03 2020-02-18 Express Imaging Systems, Llc Systems and methods for outdoor luminaire wireless control
US10904992B2 (en) 2017-04-03 2021-01-26 Express Imaging Systems, Llc Systems and methods for outdoor luminaire wireless control
US11375599B2 (en) 2017-04-03 2022-06-28 Express Imaging Systems, Llc Systems and methods for outdoor luminaire wireless control
US11653436B2 (en) 2017-04-03 2023-05-16 Express Imaging Systems, Llc Systems and methods for outdoor luminaire wireless control
US10164374B1 (en) 2017-10-31 2018-12-25 Express Imaging Systems, Llc Receptacle sockets for twist-lock connectors
US11234304B2 (en) 2019-05-24 2022-01-25 Express Imaging Systems, Llc Photocontroller to control operation of a luminaire having a dimming line
US11765805B2 (en) 2019-06-20 2023-09-19 Express Imaging Systems, Llc Photocontroller and/or lamp with photocontrols to control operation of lamp
US11212887B2 (en) 2019-11-04 2021-12-28 Express Imaging Systems, Llc Light having selectively adjustable sets of solid state light sources, circuit and method of operation thereof, to provide variable output characteristics

Also Published As

Publication number Publication date
US20080205710A1 (en) 2008-08-28
WO2007036873A3 (en) 2007-07-05
US8135177B2 (en) 2012-03-13
KR20080049063A (en) 2008-06-03
EP1932352A2 (en) 2008-06-18
TW200721041A (en) 2007-06-01
JP2009510827A (en) 2009-03-12
CN101273634A (en) 2008-09-24

Similar Documents

Publication Publication Date Title
US8135177B2 (en) Motion detection device
US9286516B2 (en) Method and systems of classifying a vehicle using motion vectors
JP4451330B2 (en) Method for detecting traffic events in compressed video
EP2803944B1 (en) Image Processing Apparatus, Distance Measurement Apparatus, Vehicle-Device Control System, Vehicle, and Image Processing Program
US9159137B2 (en) Probabilistic neural network based moving object detection method and an apparatus using the same
US9442176B2 (en) Bus lane infraction detection method and system
EP1030188B1 (en) Situation awareness system
GB2507395B (en) Video-based vehicle speed estimation from motion vectors in video streams
US10825310B2 (en) 3D monitoring of sensors physical location in a reduced bandwidth platform
KR20160062880A (en) road traffic information management system for g using camera and radar
WO2008020598A1 (en) Subject number detecting device and subject number detecting method
KR101874352B1 (en) VMS, In-vehicle terminal and intelligent transport system including the same for efficient transmission of traffic information on the road
JP6915219B2 (en) Computer implementation methods, imaging systems, and image processing systems
CN116824859A (en) Intelligent traffic big data analysis system based on Internet of things
KR102163774B1 (en) Apparatus and method for image recognition
Luo et al. Vehicle flow detection in real-time airborne traffic surveillance system
KR101073053B1 (en) Auto Transportation Information Extraction System and Thereby Method
Lin et al. Airborne moving vehicle detection for urban traffic surveillance
Habib et al. Lane departure detection and transmission using Hough transform method
Hetzel et al. Smart infrastructure: A research junction
CN115174889A (en) Position deviation detection method for camera, electronic device, and storage medium
Cai et al. Adaptive feature annotation for large video sensor networks
Kogut et al. A wide area tracking system for vision sensor networks
Khorramshahi et al. Over-height vehicle detection in low headroom roads using digital video processing
Hu et al. A high efficient system for traffic mean speed estimation from mpeg video

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2006821148

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2008531874

Country of ref document: JP

Ref document number: 12067962

Country of ref document: US

Ref document number: 1020087007154

Country of ref document: KR

WWE Wipo information: entry into national phase

Ref document number: 200680035666.4

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE

WWP Wipo information: published in national office

Ref document number: 2006821148

Country of ref document: EP