GB2343317A - Video motion detection - Google Patents

Video motion detection


Publication number
GB2343317A
Authority
GB
United Kingdom
Prior art keywords
field
image
motion
video signal
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB9823400A
Other versions
GB9823400D0 (en)
GB2343317B (en)
Inventor
Stephen Mark Keating
Andrew Patrick Compton
Stephen John Forde
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Europe Ltd
Original Assignee
Sony United Kingdom Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony United Kingdom Ltd filed Critical Sony United Kingdom Ltd
Priority to GB9823400A priority Critical patent/GB2343317B/en
Publication of GB9823400D0 publication Critical patent/GB9823400D0/en
Publication of GB2343317A publication Critical patent/GB2343317A/en
Application granted granted Critical
Publication of GB2343317B publication Critical patent/GB2343317B/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/144Movement detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Systems (AREA)

Abstract

A video motion detector comprises a frame-based difference detector 401 for detecting an image difference between a test area of a current field of a video signal and a corresponding area of another field of the video signal temporally separated from the current field by one frame; filter means 411 for at least vertically low-pass filtering the current field and a temporally adjacent field of the video signal; a field-based difference detector 412 for detecting an image difference between a test area of the current field as filtered by the filter means and a corresponding area of the temporally adjacent field as filtered by the filter means; and output means operable to generate an output signal 450 indicative of a degree of motion in the test area by a non-additive mixing 420 of the image differences generated by the frame-based difference detector and the field-based difference detector.

Description

VIDEO MOTION DETECTION This invention relates to video motion detection.
Motion detection in various forms is often used in video processing systems.
In some examples, a "motion vector" is generated, to represent the actual motion of a particular part of an image between successive images of a video signal. However, in other applications a simpler approach is used, in that it is necessary only to detect whether a particular part of an image is moving or stationary between successive images.
An example of the use of this latter technique is an image processing system in which pixels of an output interlaced image can be generated from interlaced images of an input video signal by two different algorithms, such as frame-based interpolation or field-based interpolation, in dependence on the detection of motion for those pixels.
Frame-based interpolation, in which pixels from two or more consecutive fields are used to generate an output pixel, can give better spatial resolution than field-based interpolation where pixels from only one field are used. So, frame-based interpolation is the better choice when motion is not detected, but field-based interpolation gives a better result for moving portions of an image and so is the better choice where motion is detected.
Previously proposed motion detectors have detected motion by detecting the difference between a block of pixels from a field at time t and a corresponding block from a field at time t-2, a technique referred to as frame difference detection (the two test fields being separated by one frame). Figure 1 schematically illustrates such a motion detector.
The comparison is made between fields of the same polarity to avoid the field interlace giving rise to problems with high vertical frequencies causing aliasing. Successive pairs of pixels at corresponding positions in fields t (i.e. a current field) and t-2 (i.e. the preceding field of the same polarity) are passed to a difference detector 10, where the difference in luminance between each pair of pixels is detected. A low pass filter (LPF) 20 filters the output of the difference detector 10 to smooth out sudden changes in the difference detection.
At the output of the LPF 20, absolute value logic 30 detects an absolute value of the (positive or negative) difference values. The absolute difference values are then averaged over a block of 5 x 3 pixels by an averaging circuit 40. The averaged value is then compared with a threshold value by a comparator 50, to generate a motion detection signal 60.
If the averaged difference value is detected to be greater than the threshold value then the motion detection signal is set to a state indicating "motion". If the averaged difference value is less than the threshold value then the motion detection signal indicates "no motion". (Clearly, if the two signals are equal then the output state of the motion detection signal is set by convention. Also, this description assumes that the averaged difference value increases numerically with increasing pixel difference, although of course the opposite polarity could be used for this signal.)
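The Figure 1 pipeline can be sketched in code. This is a minimal one-dimensional illustration, not the circuit itself: the 3-tap moving average standing in for the LPF 20, the 1-D block size standing in for the 5 x 3 block, and the zero padding at the edges are all assumptions.

```python
def frame_difference_motion(line_t, line_t2, threshold, block=5):
    """Sketch of the Figure 1 pipeline on one image line: pixel difference
    between same-polarity fields t and t-2, low-pass smoothing, absolute
    value, block averaging and threshold comparison."""

    def box(sig, n):
        """n-tap moving average with zero padding (illustrative LPF)."""
        half = n // 2
        padded = [0.0] * half + list(sig) + [0.0] * half
        return [sum(padded[i:i + n]) / n for i in range(len(sig))]

    # Difference detector 10: luminance difference between pixel pairs.
    diff = [a - b for a, b in zip(line_t, line_t2)]

    # LPF 20: smooth out sudden changes in the difference detection.
    smoothed = box(diff, 3)

    # Absolute value logic 30, then averaging circuit 40.
    averaged = box([abs(v) for v in smoothed], block)

    # Comparator 50: equality counts as "no motion" here, by convention.
    return [v > threshold for v in averaged]
```

A static pair of lines yields no motion anywhere; a bright object present in only one field flags motion around its position.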
A problem arises with this previously proposed system when a pattern is moving at greater than one block of pixels per field. An example of this situation is shown in Figure 2a of the accompanying drawings. In Figure 2a, an object 100 is moving from right to left at a rate of two pixel blocks 110 per field period. The position of the block over three consecutive fields t-2, t-1 and t is shown.
A frame difference technique is used for motion detection in this previously proposed system. In particular, corresponding blocks of pixels in two fields separated in time by one frame (two field periods) are compared and the image difference compared with a threshold image difference. The image difference in this example, |Fieldt - Fieldt-2|, is shown at the bottom of Figure 2a. Two areas of image motion 120 are identified, but image motion is not detected at a block 130, the position of the moving object in field t-1.
A similar phenomenon is illustrated in Figure 2b, where a repeating pattern is moving along the screen at a rate of an integral number of pattern-cycles per field.
This might happen, for example, if a camera was panned across a scene containing a pattern such as a picket fence.
As shown in Figure 2b, the function |Fieldt - Fieldt-2| picks up the extreme ends of the pattern 105 but omits important areas 107 which ought to be flagged as containing motion.
So, if the motion detection described with reference to Figure 2a and Figure 2b were used to control a selection between a field-based interpolation and a frame-based interpolation as mentioned earlier, a field-based interpolation would be selected for the areas 120 but a frame-based (no motion) interpolation technique would be incorrectly selected for the block 130 or the area 107. For Figure 2a, when output pixels are interpolated for the block 130 of an output field at time t using pixels from the corresponding blocks in input fields t and t-1, the result would be a combination of a block containing the object 100 and a block not containing the object 100. A similar result would occur for Figure 2b. This would give a subjectively disturbing double image.
The invention provides a video motion detector comprising: a frame-based difference detector for detecting an image difference between a test area of a current field of a video signal and a corresponding area of another field of the video signal temporally separated from the current field by one frame; filter means for at least vertically low-pass filtering the current field and a temporally adjacent field of the video signal; a field-based difference detector for detecting an image difference between a test area of the current field as filtered by the filter means and a corresponding area of the temporally adjacent field as filtered by the filter means; and output means operable to generate an output signal indicative of a degree of motion in the test area by a non-additive mixing of the image differences generated by the frame-based difference detector and the field-based difference detector.
The invention recognises that the problems described above cannot simply be addressed by using a comparison of temporally adjacent fields, i.e. |Fieldt - Fieldt-1|, rather than fields separated by a frame difference as before. Although this solution might initially look attractive, appearing to ensure that motion is detected under the circumstances shown in Figure 2, it in fact produces problems of its own. These problems are caused by interlace, and can mean that the fields contain vertical frequencies that are above the Nyquist frequency for an individual field. When a comparison is made between fields of a different polarity the result can be a spurious, aliased, difference signal that may be incorrectly interpreted as motion.
Instead, the invention addresses the disadvantages of the prior art by providing a motion detection circuit that uses both the frame difference and the field difference to determine motion. Thus motion is detected even in the case of an image moving at speeds greater than one block per field, thereby alleviating the double image problem mentioned above.
The use of low pass vertical filters on the inputs to the field difference circuit ensures that problems due to high vertical frequencies being detected as motion are reduced.
In embodiments of the invention, the motion detector is employed in a signal processing circuit for processing the output of a charge coupled device (CCD) video camera. In this case, the output characteristics of the CCD device itself can be used to contribute a part of the low-pass filter response applied to the input fields and also to avoid problems which can arise when a low pass filter is implemented digitally.
Specifically, a digital implementation of a low pass filter has a response which is mirrored about the Nyquist frequency. In the case of a low pass filter operating at the field sampling rate, since the field Nyquist frequency (in the vertical direction) is half the frame vertical Nyquist frequency, this property of digitally-implemented filters means that the low-pass filter response will be mirrored around the field Nyquist frequency to become a high pass response for frequencies near to the frame Nyquist frequency. In principle, this could lead to highly aliased and/or noise signals being passed to the motion detection process, which could in turn lead to an incorrect detection of motion.
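The mirroring described here follows from the conjugate symmetry of a real-coefficient digital filter's frequency response, and can be checked numerically. A minimal sketch; the 5-tap moving average is an arbitrary illustrative low-pass, not the filter used in the embodiment.

```python
import cmath
import math

def magnitude_response(taps, w):
    """|H(e^{jw})| of an FIR filter with the given real taps."""
    return abs(sum(t * cmath.exp(-1j * w * n) for n, t in enumerate(taps)))

# Illustrative low-pass: a 5-tap moving average running at the field rate.
taps = [0.2] * 5

w = 0.3 * math.pi            # a low frequency (field Nyquist sits at pi)
mirrored = 2 * math.pi - w   # its mirror image above field Nyquist
gap = abs(magnitude_response(taps, w) - magnitude_response(taps, mirrored))
# gap is essentially zero: the low-pass response reappears, mirrored about
# the field Nyquist frequency, as a high-pass response towards frame Nyquist.
```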
However, this problem is unexpectedly and advantageously alleviated when the technique is applied to the output of a CCD device. Because of the scanning technique used to read data from a CCD device, the data has a general roll-off in frequency content towards the frame Nyquist frequency (where the content tends towards zero).
So, even if the digitally implemented low pass filter used to enable field-based image differences to be assessed has a response mirrored to a high pass response near the frame Nyquist frequency, the actual frequency content of a CCD signal near the frame Nyquist frequency is very small and so the potential problems caused by this spurious high pass response are dramatically reduced.
Embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
Figure 1 schematically illustrates a frame difference motion detection circuit;
Figure 2a is a schematic diagram showing the frame difference signal of a pattern moving at two blocks per field;
Figure 2b is a schematic diagram showing the frame difference signal of a cyclic pattern moving an integral number of pattern cycles per field;
Figure 3 is a schematic diagram of a video camera apparatus;
Figure 4 schematically illustrates a video signal processing circuit;
Figure 5 schematically illustrates a motion detection circuit;
Figure 6 schematically illustrates a filter response and the output spectrum of a CCD image pickup device; and
Figure 7 schematically illustrates a field shifting operation.
Figure 3 schematically illustrates a video camera apparatus comprising a camera body 200 and a lens 210. Inside the camera body are a CCD image pick up device 220, signal processing circuitry 230 for reading a video signal from the CCD image pick up device 220, and lens aberration correction circuitry 240 for correcting aberrations caused by the particular lens in use.
The lens aberration correction circuitry 240 will be described in much greater detail below, but in summary the circuitry receives an aberration signal 250 defining distortions introduced by the lens 210 and applies corresponding corrections to the images from the image pick up device.
The aberration signal 250 from the lens 210 in fact comprises two lens output voltages: Vabr, representing lateral chromatic aberration introduced by that lens, and Vdis, representing the distortion introduced by that lens. The two lens output voltages are generated by circuitry within the lens in dependence on a current zoom, iris and/or focus setting, according to measurement data produced by the lens manufacturer indicating the expected lens aberrations at that particular zoom, iris and focus setting.
Techniques for the generation and output of these lens aberration signals are well established in the art.
The two lens output voltages Vabr and Vdis are defined by the following equations:
Vabr = C (mm) × 72.7 (mV) + 2.5 (V)
where C is the aberration at 4.4 mm distance from the image centre, mV represents a quantity in millivolts and V represents a quantity in volts; and:
Vdis = 60 × P + 2.5 (V)
where
d (mm) = (1.25 × Y - Y³) × P × 4.4
Y = (distance from the image centre) / 4.4 (mm)
and d (mm) is the positional error (deflection) at the corresponding distance from the image centre.
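These relationships can be sketched in code. The function names are illustrative rather than part of the patent, and the cubic deflection formula is a reconstructed reading of the equations above, chosen to be consistent with the distortion corrector equations given later in the description.

```python
def v_abr(C):
    """Chromatic aberration voltage: Vabr = C (mm) x 72.7 (mV) + 2.5 (V),
    where C is the aberration at 4.4 mm from the image centre."""
    return C * 0.0727 + 2.5   # volts

def v_dis(P):
    """Distortion voltage: Vdis = 60 x P + 2.5 (V)."""
    return 60.0 * P + 2.5

def deflection_mm(P, distance_mm):
    """Positional error d (mm) at a given distance from the image centre,
    with Y = distance / 4.4 and d = (1.25*Y - Y**3) * P * 4.4."""
    Y = distance_mm / 4.4
    return (1.25 * Y - Y ** 3) * P * 4.4
```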
So, the lens aberration correction circuitry determines spatial distortions applied by the lens 210 to the image picked up by the image pickup device 220 and, using horizontal and vertical interpolation techniques, applies corrections to restore the image to its undistorted form.
The video signal as corrected by the lens aberration correction circuitry 240 is output for further signal handling, e. g. recording, transmission, mixing etc. It is also routed to a monitor screen 260 within the camera body 200, so that the currently picked up image can be viewed via a viewfinder 270.
Figure 4 schematically illustrates a video processing apparatus embodying the lens aberration circuitry 240. The circuitry of Figure 4 receives a digital video signal from the signal processor 230 and the aberration signal from the lens 210, and generates an output video signal 280 to form the output from the camera apparatus.
The lens aberration circuitry comprises an error correction module 300, a horizontal interpolator 310, a vertical field-based interpolator 320, a vertical frame-based interpolator 330, frame and field delays 340, a motion detector 350 and an output mixer 360.
As mentioned above, the lens aberration signal comprises two lens output voltages Vabr and Vdis. In order to make use of these in the circuitry for correcting lens-introduced distortions, these output voltages have to be converted into pixel corrections to be applied at various positions in each field of the video signal.
Equations defining this technique will now be described. The equations make use of a number of variables listed below, and for each variable the value applicable to the present embodiment is given in brackets:
SIZEX - number of pixels of the CCD (1920)
SIZEY - number of frame lines of the CCD (1080)
CCDX - horizontal size in mm of the CCD (9.587)
CCDY - vertical size in mm of the CCD (5.382)
XPOS - horizontal position in pixels from the centre of the CCD
YPOS - vertical position in frame lines from the centre of the CCD
Define x, the distance from the centre of the CCD in mm for XPOS pixels:
x (mm) = (CCDX / SIZEX) × XPOS
Define y, the distance from the centre of the CCD in mm for YPOS frame lines:
y (mm) = (CCDY / SIZEY) × YPOS
In the error correction module 300, the lens output voltage Vdis is supplied to a distortion corrector 302 and the lens output voltage Vabr is supplied to a chromatic aberration corrector 304. These each generate values for the distortion (as a number of pixels of image displacement) in the horizontal (x) and vertical (y) image directions for each position within the image. The distortion signals are added together by an adder 306 to form an error signal 308 supplied to the horizontal interpolator 310, the vertical field-based interpolator 320 and the vertical frame-based interpolator 330.
The distortion corrector 302 calculates the following equations using the variable P described earlier :
Xdistortion (pixels) = 4.4 × P × (x / 4.4) × (1.25 - ((x / 4.4)² + (y / 4.4)²)) / (CCDX / SIZEX)
Ydistortion (pixels) = 4.4 × P × (y / 4.4) × (1.25 - ((x / 4.4)² + (y / 4.4)²)) / (CCDY / SIZEY)
The chromatic aberration corrector 304 carries out the following calculations using the variable C defined earlier:
Xchromatic (pixels) = C × XPOS / 4.4
Ychromatic (pixels) = C × YPOS / 4.4
So, the error signal 308 comprises an X component and a Y component giving the horizontal and vertical distortion represented by the lens output voltages Vdis and Vabr at each position in the image, measured in terms of pixels in the horizontal direction and lines in the vertical direction.
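The error correction module 300 can be sketched end to end. This is a minimal illustration assuming a particular reading of the distortion and chromatic aberration formulas; the function `error_signal` and its argument names are invented for the sketch.

```python
SIZEX, SIZEY = 1920, 1080      # CCD pixels and frame lines
CCDX, CCDY = 9.587, 5.382      # CCD dimensions in mm

def error_signal(xpos, ypos, P, C):
    """Sketch of the error correction module 300: converts the lens
    coefficients P (distortion) and C (chromatic aberration) into pixel
    displacements at position (XPOS, YPOS) from the CCD centre."""
    x = (CCDX / SIZEX) * xpos          # position in mm from the centre
    y = (CCDY / SIZEY) * ypos
    radial = 1.25 - ((x / 4.4) ** 2 + (y / 4.4) ** 2)

    # Distortion corrector 302: cubic radial distortion in pixels/lines.
    x_dist = 4.4 * P * (x / 4.4) * radial / (CCDX / SIZEX)
    y_dist = 4.4 * P * (y / 4.4) * radial / (CCDY / SIZEY)

    # Chromatic aberration corrector 304: linear in image position.
    x_chr = C * xpos / 4.4
    y_chr = C * ypos / 4.4

    # Adder 306: combined error signal 308 (X in pixels, Y in lines).
    return (x_dist + x_chr, y_dist + y_chr)
```

At the image centre both components vanish, as expected for radial corrections.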
The horizontal interpolator 310 comprises a pixel delay and integer pixel shift circuit 312 and an 11 tap 64 sub-position interpolation filter 314, although larger filters could be used. These operate in a standard fashion to interpolate each pixel along each line of the image from other pixels disposed along the same line, but applying any necessary horizontal shift to correct the horizontal component of the image distortion specified by the error signal 308.
So, for example, if the error signal 308 specifies that at a current image position, the lens has introduced a horizontal shift to the left of, say, 3.125 pixels, the horizontal interpolator 310 will interpolate a current output pixel based on an interpolation position 3.125 pixels to the right of the current output pixel's position.
Horizontal interpolation used in this manner for scaling images is well established in the art.
The particular interpolation filter used in this example is an 11 tap 64 sub-position filter, which means that horizontal pixel shifts can be defined to sub-pixel accuracy, and in particular to an accuracy of one sixty-fourth of the separation between two adjacent pixels. Again, this technique is very well established in the art.
Accordingly, the horizontal interpolator 310 outputs lines of pixels in which the horizontal component of distortion defined by the lens output voltages Vdis and Vabr has been corrected. These lines of pixels are supplied in parallel to the vertical field-based interpolator 320 and the vertical frame-based interpolator 330.
The vertical interpolators 320, 330 act to correct the vertical component of the distortions specified by the lens output voltages Vdis and Vabr in a roughly similar manner to the horizontal correction applied by the horizontal interpolator 310. So, if the distortion specified by the lens output voltages at a particular position in the image is a vertical shift of, say, 7.375 lines upwards, the vertical interpolators will generate pixels at that image position by vertical interpolation about a centre position in the input field which is 7.375 lines below the position corresponding to the current output pixel.
Each of the vertical field-based interpolator and the vertical frame-based interpolator comprises a series of first-in-first-out buffers (FIFOs) 322, 332 providing line delays for the interpolation process, a multiplexer 324, 334 providing integer field line shifts and an 11 tap 32 sub-position interpolation filter 326, 336 to provide sub-line accuracy in the interpolation process. In addition the vertical frame-based interpolator has a field delay 338 to provide two adjacent fields for use in the interpolation process.
The vertical field-based interpolator 320 and the vertical frame-based interpolator 330 operate in parallel to produce respective versions of each output pixel.
The selection of which version to use in forming the output video signal 280 is made by the output mixer 360 under the control of the motion detector 350.
The reason for this choice of the two versions of each output pixel is as follows. The vertical frame-based interpolator uses pixels from two adjacent fields to generate output pixels by interpolation. Because the two adjacent fields are interlaced, this gives much better spatial resolution and avoids alias problems when compared to vertical field interpolation, where pixels from only one field are used. However, if there is motion present in the part of the image currently being interpolated, the use of two temporally separated fields (in the frame-based interpolator) will lead to an inferior output because the image will have moved between the two fields.
Accordingly, if motion is detected by the motion detector 350 in a sub-area surrounding a current output pixel, that output pixel is selected to be the version produced by the field-based interpolator 320. Conversely, if motion is not detected in the sub-areas surrounding the current output pixel, that pixel is selected to be the version generated by the vertical frame-based interpolator 330. This selection is made by the output mixer 360 under the control of a motion signal 370 supplied by the motion detector 350.
The output mixer can work in another way, by operating a "soft threshold".
For example, if the threshold is x and a "degree of softness" is y, then for any degree of motion generated by the motion detector which is less than (x - y), the pixel from the frame-based interpolator will form the output pixel. If the degree of motion is greater than (x + y) then the pixel from the field-based interpolator will be used. Between these two levels, the output mixer combines the two possible output pixels in a normalised additive combination so that in the range (x - y) < degree of motion < (x + y), the proportion of the frame-based pixel varies linearly with respect to degree of motion.
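The soft-threshold behaviour can be sketched as follows, assuming a linear cross-fade across the band between (x - y) and (x + y); the function name is illustrative.

```python
def soft_threshold_mix(frame_pixel, field_pixel, motion, x, y):
    """Sketch of the output mixer's "soft threshold": below x - y the
    frame-based pixel is used, above x + y the field-based pixel, and in
    between the two are cross-faded linearly with the degree of motion."""
    if motion <= x - y:
        return frame_pixel
    if motion >= x + y:
        return field_pixel
    t = (motion - (x - y)) / (2.0 * y)   # 0..1 across the soft band
    return (1.0 - t) * frame_pixel + t * field_pixel
```

At the threshold itself (degree of motion equal to x) the output is an equal blend of the two candidate pixels.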
Figure 5 schematically illustrates the motion detector 350 in greater detail.
The motion detector 350 receives pixels of three temporally adjacent fields from the frame/field delays 340, namely a current field at a time t, the temporally preceding field t-1 and the field preceding that, t-2. The motion detector comprises a frame-based motion detector 400, a field-based motion detector 410, a non additive mixer (NAM) 420 and a threshold comparator 430.
In basic terms, the motion detector 350 operates to detect motion by two techniques in parallel. The frame-based motion detector 400 detects motion between corresponding blocks of pixels in two fields of the same polarity, namely fields t and t-2. The field-based motion detector 410 detects differences between corresponding blocks of pixels in two adjacent fields of opposite polarities, namely fields t and t-1.
After a scaling process (see below) the image differences detected by the two motion detectors 400, 410 are combined by the NAM 420 before being compared to a threshold image difference 440 by the comparator 430.
If the comparator 430 detects that the output of the NAM 420 is greater than the threshold 440, an output motion signal 450 is set to a state indicating that motion is present in the block surrounding the current output pixel. If the output of the NAM 420 is less than the threshold 440, the motion signal 450 is set to a state indicating that motion is not present in the block of pixels surrounding the current output pixel.
(If the output of the NAM 420 is equal to the threshold 440, the setting applied to the motion signal 450 can be selected by convention. In the present embodiment, if the NAM output is equal to the threshold 440, it is considered by convention that motion is indeed present.)
Considering first the frame-based motion detector, pixels of the two input fields t and t-2 are supplied to a difference detector 401 and in parallel to a pair of high pass filters 402 (or alternatively high pass vertical filters NAMed with high pass horizontal filters, which may also be used in the field-based detector). The difference detector operates to detect the difference in luminance between pixels at corresponding positions in the two fields. The high pass filters 402 operate to detect a degree of detail present in the two fields by extracting a high frequency component from the bit stream representing the two fields. The outputs of the two high pass filters 402 are processed 403 to generate a gain control signal 404 which is supplied, with the luminance difference detected by the difference detector 401, to a multiplier 405.
The gain control signal 404 is formed so that as the high frequency component (a representation of image detail) of one or both of the input fields increases, the image difference as detected by the difference detector 401 is scaled down so that the value passed to the low pass filter 406 appears to represent a smaller image difference. This can be achieved in a number of ways, but the example used here is to calculate the gain control signal as follows:
gain control signal = k - max(HPF1, HPF2)
The use of the gain control signal to reduce the sensitivity of the frame-based motion detector to image difference when there is a large detail (high spatial frequency) component avoids the problem that the block averaging process described below could otherwise lead to the frame-based motion detector arriving at the same result for a low contrast edge or object moving quickly as for a high contrast edge moving slowly. This of course would not be an appropriate response; in the example given above, frame-based interpolation could well be the better choice for a high contrast edge moving slowly, but the system would be forced to use field-based interpolation because of the erroneous detection of motion. So, the sensitivity to image difference is reduced, in this embodiment by scaling down the detected difference signal, as the amount of detail detected by the HPFs increases.
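The gain control can be sketched as follows. The value of the constant k, and the clamping of the gain at zero, are assumptions not stated in the text.

```python
def scaled_frame_difference(diff, hpf1, hpf2, k=1.0):
    """Sketch of the gain control 404 and multiplier 405: the frame
    difference is scaled down as the high-frequency (detail) content of
    either input field grows, so highly detailed areas need a larger raw
    difference before they register as motion."""
    gain = k - max(hpf1, hpf2)
    gain = max(gain, 0.0)   # assumed: never invert the difference sign
    return diff * gain
```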
The output of the multiplier 405 is supplied to a low pass filter 406 which filters the gain-controlled difference signal to smooth out sudden changes in the difference detection. Absolute value logic 407 then detects the absolute value of the (positive or negative) difference values. The absolute difference values are then averaged over a block of 5 x 3 pixels by an averaging circuit 408, with the average being supplied to the NAM 420.
Turning to the field-based difference detector 410, the two fields are supplied in parallel to vertical low pass filters 411 which are arranged to remove high frequency components from the fields to avoid alias problems and to provide a fractional line shift to each of the two (opposite polarity) fields so that a proper comparison can be made by a difference detector 412. This process of line-shifting is performed using interpolation filters and is described in more detail below. The filtered pixels are then supplied to the difference detector 412, a low pass filter 413 and an absolute value detector 414 as described above. The output of the absolute value detector 414 is multiplied by a scaling coefficient α before the scaled result is passed to an averaging circuit 415. The averaged output is supplied to the NAM 420.
The coefficient α can be adjusted by the user by means of an adjustment control on the camera body (not shown), so that the user can vary the relative response of the frame-based and field-based motion detectors.
The NAM 420 combines its two inputs so that the output of the NAM represents the greater of the two inputs.
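Taking the two block-averaged differences together, the NAM and comparator stages can be sketched as follows. The scaling of the field-based path by α follows the description; treating equality with the threshold as motion follows the stated convention; the function name is illustrative.

```python
def motion_signal(frame_diff_avg, field_diff_avg, alpha, threshold):
    """Sketch of the NAM 420 and comparator 430: scale the field-based
    average by alpha, take the greater of the two averages (non-additive
    mix), and compare against the threshold 440.  Equality with the
    threshold counts as motion, per the present embodiment."""
    nam = max(frame_diff_avg, alpha * field_diff_avg)
    return nam >= threshold
```

Raising α makes the field-based detector dominate the mix; lowering it defers to the frame-based detector.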
Figure 6 illustrates the effect of combining the filter response of the LPFs 411 with the frequency spectrum output by the CCD pick-up device 220. Particularly when the CCD device is operating in a field mode without a so-called EVS facility, higher vertical frequencies are attenuated in the CCD's output. This is shown by the curve marked "CCD output".
The frequency response of each of the two LPFs 411 is illustrated by the curve marked "low pass filter". This frequency response provides a generally increasing attenuation up to 0.4 × field Nyquist, but because of the way that digital filters of this type are implemented the response is reflected around the field Nyquist frequency to form a high pass response at and near frame Nyquist.
However, the combination of the CCD output response and the LPF response (shown in dotted line in Figure 6) means that this reflected response of the LPFs 411 to a high pass response near frame Nyquist is not a problem. If it is desired to decrease the high frequency components allowed through by the combined response, the scaling factor α can be adjusted to reduce the contribution of the field-based detector to the overall motion detection process.
In other embodiments, the coefficient α can be controlled automatically by control logic (not shown) in response to the EVS setting of the camera. EVS stands for Enhanced Vertical resolution System and controls the amount of averaging between adjacent lines of a CCD as the image is read out. As EVS is increased, the averaging decreases so that the amount of vertical detail increases, but then so does the amount of vertical aliasing.
As EVS is increased, the coefficient α can be varied (in this example, numerically reduced) so as to reduce the sensitivity of the field-based motion detector in comparison to that of the frame-based motion detector, so that the increased high frequency components allowed through by the effect of EVS on the CCD's response are attenuated by the reduced scaling factor α.
Finally, Figure 7 schematically illustrates the line shifting process also performed by the LPFs 411. A proper comparison of the two fields cannot be made as the lines of the two fields are not temporally aligned because of interlace. So, the LPFs 411 also vertically interpolate output pixels as though the lines of one field had been shifted down and those of the other field shifted up to give temporal alignment between the two sets of lines.
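The fractional line shift can be illustrated with simple linear interpolation between adjacent lines, a stand-in for the interpolation filters actually built into the LPFs 411; the shift amount and edge handling here are assumptions for the sketch.

```python
def shift_lines(field, frac):
    """Vertically shift a field (a list of lines of pixel values) by a
    fractional number of lines using linear interpolation between adjacent
    lines.  frac in [0, 1); the last line is repeated at the edge."""
    out = []
    for i in range(len(field)):
        j = min(i + 1, len(field) - 1)
        out.append([(1.0 - frac) * a + frac * b
                    for a, b in zip(field[i], field[j])])
    return out

# Interpolating each of the two opposite-polarity fields part-way toward
# the other's line positions aligns the two sampling grids before the
# difference detector 412 compares them.
```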

Claims (12)

1. A video motion detector comprising: a frame-based difference detector for detecting an image difference between a test area of a current field of a video signal and a corresponding area of another field of the video signal temporally separated from the current field by one frame; filter means for at least vertically low-pass filtering the current field and a temporally adjacent field of the video signal; a field-based difference detector for detecting an image difference between a test area of the current field as filtered by the filter means and a corresponding area of the temporally adjacent field as filtered by the filter means; and output means operable to generate an output signal indicative of a degree of motion in the test area by a non-additive mixing of the image differences generated by the frame-based difference detector and the field-based difference detector.
  2. A motion detector according to claim 1, in which the frame-based difference detector and the field-based difference detector are arranged to operate substantially in parallel.
  3. A motion detector according to claim 1 or claim 2, in which the output means comprises a non-additive mixer arranged to combine the image difference detected by the frame-based difference detector and the image difference detected by the field-based difference detector so as to generate a composite image difference indicative of the greater of the two detected image differences.
  4. A motion detector according to claim 3, comprising a scaling circuit for scaling the image difference detected by either or both of the field-based difference detector and the frame-based difference detector so as to introduce a relative scaling factor a between the two image differences.
  5. A motion detector according to claim 4, further comprising user-operable adjustment means for adjusting the value of a.
  6. Video signal processing apparatus for processing images of an input video signal, the apparatus comprising: a motion detector according to any one of the preceding claims, the motion detector being arranged to detect whether motion is present in each of a plurality of image areas in images of the video signal; and a video signal processor for processing images of the video signal, the video signal processor having at least a first and a second mode of operation, the first or the second mode of operation being selected for processing each image area in response to a detection by the motion detector of a degree of image motion present in that image area.
  7. Video camera apparatus comprising: an image pickup device for generating a video signal; a lens arrangement for focusing light onto the image pickup device, the lens arrangement being operable to generate a lens aberration signal in response at least to a current focus and/or zoom setting; and video signal processing apparatus according to claim 6, the video signal processor being operable to process the video signal in accordance with the lens aberration signal.
  8. A method of video motion detection comprising the steps of: detecting a frame-based image difference between a test area of a current field of a video signal and a corresponding area of another field of the video signal temporally separated from the current field by one frame; at least vertically low-pass filtering the current field and a temporally adjacent field of the video signal; detecting a field-based image difference between a test area of the current field as so filtered and a corresponding area of the temporally adjacent field as so filtered; and generating an output signal indicative of a degree of motion in the test area by a non-additive mixing of the frame-based and field-based image differences.
  9. A video motion detector substantially as hereinbefore described with reference to the accompanying drawings.
  10. Video camera apparatus substantially as hereinbefore described with reference to the accompanying drawings.
  11. Video signal processing apparatus substantially as hereinbefore described with reference to the accompanying drawings.
  12. A method of video motion detection substantially as hereinbefore described with reference to the accompanying drawings.
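As a rough end-to-end illustration of the method of claims 1, 3 and 8: the frame-based difference compares the current field with the field one frame earlier, the field-based difference compares vertically low-pass-filtered versions of the current and temporally adjacent fields, and the output takes the greater of the two (with the relative scaling a of claim 4). The [1, 2, 1]/4 vertical kernel, the mean-absolute-difference metric, the list-of-lists image representation, and the default a = 0.5 are illustrative assumptions, not taken from the patent.

```python
def vlpf(field):
    """Vertical low-pass filter: a [1, 2, 1]/4 kernel applied down each
    column, with edge lines repeated (kernel choice is an assumption)."""
    n = len(field)
    out = []
    for i in range(n):
        up, dn = field[max(i - 1, 0)], field[min(i + 1, n - 1)]
        out.append([(u + 2 * c + d) / 4.0
                    for u, c, d in zip(up, field[i], dn)])
    return out


def mean_abs_diff(a, b):
    """Mean absolute pixel difference over a test area."""
    total = sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    count = sum(len(r) for r in a)
    return total / count


def detect_motion(curr, prev_field, prev_frame, a=0.5):
    """Frame-based difference (fields one frame apart), field-based
    difference on vertically filtered adjacent fields, then the
    non-additive mix of claim 3: the greater of the two."""
    frame_diff = mean_abs_diff(curr, prev_frame)
    field_diff = mean_abs_diff(vlpf(curr), vlpf(prev_field))
    return max(frame_diff, a * field_diff)
```

A static scene yields zero from both branches; inter-field change with no inter-frame change (the aliasing-prone case the filtering addresses) still registers through the scaled field-based branch.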
GB9823400A 1998-10-26 1998-10-26 Video motion detection Expired - Fee Related GB2343317B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB9823400A GB2343317B (en) 1998-10-26 1998-10-26 Video motion detection

Publications (3)

Publication Number Publication Date
GB9823400D0 GB9823400D0 (en) 1998-12-23
GB2343317A true GB2343317A (en) 2000-05-03
GB2343317B GB2343317B (en) 2003-02-26

Family

ID=10841301

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9823400A Expired - Fee Related GB2343317B (en) 1998-10-26 1998-10-26 Video motion detection

Country Status (1)

Country Link
GB (1) GB2343317B (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0263375A2 (en) * 1986-09-30 1988-04-13 Nippon Hoso Kyokai A method and apparatus for detecting the motion of image in a television signal

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006022705A1 (en) * 2004-08-10 2006-03-02 Thomson Licensing Apparatus and method for indicating the detected degree of motion in video
US8624980B2 (en) 2004-08-10 2014-01-07 Thomson Licensing Apparatus and method for indicating the detected degree of motion in video
WO2007149247A3 (en) * 2006-06-16 2008-03-20 Raytheon Co Imaging system and method with intelligent digital zooming
US8054344B2 (en) 2006-06-16 2011-11-08 Raytheon Company Imaging system and method with intelligent digital zooming
ES2390316A1 (en) * 2010-02-26 2012-11-08 Enrique Caruncho Torga Improvements introduced to video-surveillance equipment. (Machine-translation by Google Translate, not legally binding)
ES2400759A1 (en) * 2010-09-06 2013-04-12 Enrique Caruncho Torga Video surveillance team (Machine-translation by Google Translate, not legally binding)

Similar Documents

Publication Publication Date Title
JP2687670B2 (en) Motion detection circuit and image stabilization device
KR930002613B1 (en) Image motion vector detector
US5729290A (en) Movement detection device and focus detection apparatus using such device
US6693676B2 (en) Motion detecting apparatus for detecting motion information of picture within signal
JPH04255179A (en) Hand blur detector for video data
JP4249272B2 (en) High resolution electronic video enlargement apparatus and method
EP0556501B1 (en) Video motion detectors
JPH04151982A (en) Moving vector detector
JP2826018B2 (en) Video signal noise reduction system
GB2202706A (en) Video signal processing
GB2343317A (en) Video motion detection
WO1992007443A1 (en) Picture movement detector
GB2365646A (en) An image processor comprising an interpolator and an adaptable register store
JPH06237412A (en) Video processor
GB2343318A (en) Video motion detection
JPH06334994A (en) Adaptive motion interpolation signal generator using motion vector
JP2552060B2 (en) Method and apparatus for motion aperture correction
JPH01215185A (en) Contour compensation circuit
GB2343316A (en) Video interpolation
JPH07107368A (en) Image processor
JPH06237411A (en) Video processor
GB2360897A (en) Video motion detection
JPH0767025A (en) Video processor
JP3395186B2 (en) Video camera and video camera image vibration display method
JP2792767B2 (en) Imaging device

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20111026