GB2360897A - Video motion detection - Google Patents

Video motion detection

Info

Publication number
GB2360897A
GB2360897A (application GB0007938A)
Authority
GB
United Kingdom
Prior art keywords
image
signal
motion
video
difference
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB0007938A
Other versions
GB0007938D0 (en)
Inventor
Matthew Patrick Compton
Stephen Mark Keating
Stephen John Forde
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Europe Ltd
Original Assignee
Sony United Kingdom Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony United Kingdom Ltd filed Critical Sony United Kingdom Ltd
Priority to GB0007938A priority Critical patent/GB2360897A/en
Publication of GB0007938D0 publication Critical patent/GB0007938D0/en
Publication of GB2360897A publication Critical patent/GB2360897A/en
Withdrawn legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/144Movement detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Systems (AREA)

Abstract

A video motion detector arranged in operation to generate a motion signal representative of motion between images represented by a video signal. The video motion detector comprises a difference detection processor which is arranged in operation to generate a first image difference signal representative of a difference between a test area of a current image (t) of the video signal and a corresponding area of another image (t-2) of the video signal, and a second image difference signal representative of a difference between the test area of the current image (t) and the corresponding area of the other image (t-2) displaced by a predetermined amount in a predetermined direction, and a selection processor coupled to the difference detection processor which is arranged in operation to generate the motion signal from the first and the second image difference signals. The selection processor may generate the motion signal by selecting a minimum absolute value of the first and the second image difference signals, the motion detector being arranged to ignore motion corresponding to the displacement by the predetermined amount in the predetermined direction. This creates a 'dead-zone' in the motion detection. The motion detector finds application in detecting motion of images, which are processed by a video processor to reduce chromatic aberration distortion caused by an imaging lens, in dependence upon motion of the images represented by the video signal.

Description

VIDEO MOTION DETECTION

Field of Invention
The present invention relates to video motion detectors and methods of detecting motion in video images. The present invention also relates to video signal processing apparatus and methods which operate to process video signals in accordance with motion of images represented by the video signal.
Background of Invention
It is known from co-pending UK patent applications serial numbers UK 9823400.8 and UK 9823401.6 to improve the quality of an image represented by a video signal by compensating for the effects of chromatic aberration. Typically the chromatic aberration is introduced by an imaging lens of, for example, a video camera which forms the image from which the video signal was generated. This improvement is achieved by interpolating between parts of the image represented within a field of the video signal and corresponding parts of the image represented in a different field.
As disclosed in these UK patent applications, interpolation can be performed using frame-based interpolation or field-based interpolation. With frame-based interpolation pixels from two or more fields are used to generate an output pixel, whereas with field-based interpolation pixels from only one field are used. Frame-based interpolation can provide better spatial resolution, because the interpolation is performed on two interlaced fields in which the sampling rate of the video signal is consistent with the bandwidth of the image frequencies made up from the two interlaced fields. As a result an interpolated image produced from frame-based interpolation does not usually suffer from errors introduced by vertical aliasing within the interpolated images. However, if there is motion present in the part of the image being interpolated, the use of two temporally separated fields (in the frame-based interpolation) can produce an inferior image quality because the image has moved between the fields. As explained in UK patent application No. UK 9823400.8, this can cause double imaging. In the event that there is motion present in the image, then field-based interpolation is preferred, although this can result in a reduction of the image quality as a result of
artefacts introduced by vertical aliasing. For this reason it is necessary to detect motion of the parts of the image being interpolated and to switch between frame-based and field-based interpolation in dependence upon the detected motion.
Previously proposed motion detectors have detected motion by detecting the difference between a block of pixels from a field at time t and a corresponding block from a field at time t-2, a technique referred to as frame difference detection (the two test fields being separated by one frame). Figure 1 schematically illustrates such a motion detector.
The comparison is made between field signals of the same polarity to avoid the field interlace giving rise to problems with high vertical frequencies causing aliasing.
Successive pairs of pixels at corresponding positions from fields t (i.e. a current field) and t-2 (i.e. the preceding field of the same polarity) are passed to a difference detector where the difference in luminance between each pair of pixels is detected. A low pass filter (LPF) 20 filters the output of the difference detector 10 to smooth out sudden changes in the difference detection.
At the output of the LPF 20, absolute value logic 30 detects an absolute value of the (positive or negative) difference values. The absolute difference values are then averaged over a block of 5 x 3 pixels by an averaging circuit 40. The averaged value is then compared with a threshold value by a comparator 50, to generate a motion detection signal 60.
If the averaged difference value is detected to be greater than a threshold value then the motion detection signal is set to a state indicating "motion". If the averaged difference value is less than the threshold value then the motion detection signal indicates "no motion". If the two signals are equal then the output state of the motion detection signal is set by convention. This description assumes that the averaged difference value increases numerically with increasing pixel difference, although of course the opposite polarity could be used for this signal.
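As a rough illustration of this previously proposed detector (not part of the patent text), the following Python sketch applies the same chain of steps to one block of luminance samples; the low pass filter kernel and the threshold value are assumptions chosen only for illustration.

import numpy as np

def prior_art_motion_flag(block_t, block_t2, threshold=8.0):
    # Frame-difference detection for one 3x5 block, loosely following Figure 1:
    # difference detector 10, low pass filter 20, absolute value logic 30,
    # averaging circuit 40 and comparator 50.
    diff = block_t.astype(float) - block_t2.astype(float)
    kernel = np.array([0.25, 0.5, 0.25])              # assumed horizontal LPF
    smoothed = np.apply_along_axis(lambda row: np.convolve(row, kernel, mode="same"), 1, diff)
    averaged = np.abs(smoothed).mean()                # average over the 5 x 3 block
    return averaged >= threshold                      # equality counts as "motion" by convention

a = np.array([[16, 16, 16, 16, 16]] * 3)              # block from field t
b = np.array([[16, 16, 40, 40, 16]] * 3)              # co-sited block from field t-2
print(prior_art_motion_flag(a, b))                    # True: averaged difference exceeds the threshold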
Summary of Invention
According to the present invention there is provided a video motion detector arranged in operation to generate a motion signal representative of motion between images represented by a video signal, the video motion detector comprising a difference detection processor which is arranged in operation to generate a first image difference signal representative of a difference between a test area of a current image of the video signal and a corresponding area of another image of the video signal, and a second image difference signal representative of a difference between the test area of the current image and the corresponding area of the other image displaced by a predetermined amount in a predetermined direction, and a selection processor coupled to the difference detection processor which is arranged in operation to generate the motion signal from the first and the second image difference signals.
A video motion detector according to embodiments of the present invention can respond to a particular rate and direction of motion rather than simply any motion above a rate of change of more than one pixel between a test area of the current image and the corresponding area of the other image. This is effected by providing a second image difference signal representative of a difference between the test area and the corresponding area, but shifted by a predetermined displacement corresponding to motion in a predetermined direction.
Advantageously in preferred embodiments, the selection processor may be arranged in operation to generate the motion signal by selecting a minimum absolute value of the first and the second image difference signals.
It has been discovered that although parts of an image may be moving it is advantageous to ignore particular types of motion. This is achieved by selecting a minimum absolute value of the first and the second image difference signals. This produces a so-called 'dead-zone' for motion detection. For the illustrative example described above, in which an interlaced video signal is processed using either frame-based interpolation or field-based interpolation, it has been observed that frame-based interpolation still provides better image quality than field-based interpolation when the parts of the image are moving substantially in a particular direction. For this example, the particular direction may be substantially horizontal with respect to the orientation of the image. As such, a substantial improvement is provided to a motion detector in which the second image difference signal is generated from a comparison between a test area of a current image and a corresponding area of another image offset by a predetermined displacement in the horizontal direction. The predetermined displacement corresponds to a degree of movement in that direction which is to be disregarded by the motion detector. This provides the 'dead-zone' in the motion detection, in which, although there is some movement of the part of the image, this movement is ignored by the motion detector.
Advantageously the second image difference signal may be one of a plurality of image difference signals, each of which is representative of a difference between the test area of the current image and the corresponding area of the other image displaced by a different predetermined amount, the selection processor being arranged in operation to produce the motion signal by selecting a minimum absolute value of the first and the plurality of difference signals.
By providing a plurality of image difference signals at different predetermined displacements, the 'dead zone' introduced into the motion detector may be expanded and shaped in accordance with the requirements of a particular application.
Furthermore, each of the plurality of image difference signals may be paired with another of the plurality of image difference signals, each of the pair of image difference signals being generated by a displacement of the test area and the corresponding area by the same predetermined amount but opposite in direction. This provides the dead zone with the same relative movement in the predetermined direction whether the motion is in this direction or in a reverse direction. In preferred embodiments at least one of the image difference signals may be representative of a predetermined displacement corresponding to a fraction of a pixel.
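A minimal sketch of this selection (Python/NumPy, not part of the patent text): the displacement set, the use of horizontal shifts only, and the linear interpolation used for the fractional-pixel shifts are all illustrative assumptions.

import numpy as np

def dead_zone_difference(block_t, rows_t2, col0, displacements=(0.0, -1.0, -0.5, 0.5, 1.0)):
    # Minimum absolute block difference between a test block of field t and the
    # corresponding block of field t-2 sampled at several horizontal displacements.
    # Displacement 0 gives the ordinary frame difference; the paired +/- values
    # (including fractional ones) create the 'dead-zone'.
    h, w = block_t.shape
    full_w = rows_t2.shape[1]
    candidates = []
    for d in displacements:
        xs = np.arange(col0, col0 + w) + d
        shifted = np.vstack([np.interp(xs, np.arange(full_w), row) for row in rows_t2.astype(float)])
        candidates.append(np.abs(block_t.astype(float) - shifted).mean())
    return min(candidates)

If the block has simply drifted horizontally by up to one pixel between the two fields, one of the displaced comparisons matches closely, the selected minimum stays small and no motion is reported.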
In order to provide smooth detection of motion whereby sudden detection of an object is avoided in a case where the image accelerates slightly past a detection threshold, at least two of the plurality of image difference signals may be combined to form at least one composite image difference signal and the selection processor may be arranged in operation to produce the motion signal by selecting the minimum absolute value of at least one of the first image difference signal and the at least one composite signal. Furthermore one of the plurality of image difference signals forming the composite signal may correspond to a larger displacement representative of motion at boundaries of the dead-zone. In preferred embodiments one of the two image difference signals combined to form the composite signal may be the first image difference signal, which is not displaced. This provides a softer transition at boundaries of the dead-zone.
The video motion detector may comprise a frequency component analyser arranged to receive the test area of the current image of the video signal and the corresponding area of the other image of the video signal, and to generate a control signal indicative of an amount of high frequency component in the predetermined direction of displacement between the components of the current and the other image for which the second image difference signal is formed, wherein the selection processor is coupled to the frequency component analyser and arranged in operation to adapt the second image difference signals in accordance with the control signal.
As already explained, embodiments of the present invention provide a particular advantage by introducing a dead-zone into the motion detection, which is used for example to effect frame-based interpolation even though there is some movement in the image in a predetermined direction. In preferred embodiments frame-based interpolation is performed even if there is a small amount of horizontal motion.
However, it has been discovered that if there is a relatively high amount of detail in the image in the horizontal direction, which is represented as a significant amount of horizontal high frequency components of the image, then although a small amount of motion in this direction is usually discounted by the dead-zone, it is preferable not to use frame-based interpolation. This is because a double image produced by frame-based interpolation when there is movement between images will be more likely to be noticed when there is a significant amount of detail in the direction of motion. A further improvement is therefore provided to the motion detector by detecting the high frequency component in the direction of motion within the dead-zone and reducing the dead-zone in accordance with the amount of high frequency component in this direction.
In a preferred embodiment the current image may correspond to a current field of the video signal and another image may correspond to another field of the video signal temporally separated by one frame, the difference detector being a frame-based difference detector and the motion signal generated by the selection processor being a frame-based motion signal. Furthermore, the video motion detector may further comprise a field-based difference detector arranged in operation to receive the video signal and to generate from the video signal a field-based motion signal, and a combining means coupled to the selection processor and to the field-based difference detector and operable to form a composite motion signal from the field-based motion signal and the frame-based motion signal.
In accordance with an aspect of the present invention there is provided a video signal processing apparatus according to patent claim 1. According to a second aspect of the present invention there is provided a method of video motion detection according to patent claim 16.
Further respective aspects and features of the invention are defined in the appended claims.
Brief Description of the Drawings
Embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings in which:
Figure 1 is a schematic diagram of a previously proposed video motion detector;
Figure 2 schematically illustrates a video camera apparatus;
Figure 3 schematically illustrates a video processing apparatus;
Figure 4 schematically illustrates a motion detector;
Figure 5 schematically illustrates an image difference detection processor forming part of the motion detector of figure 4; and
Figure 6 is a graphical representation of a dead-zone formed by the motion detector shown in figure 4.
Description of Preferred Embodiments
In co-pending UK patent application No. UK 9823400.8 an improved motion detector is proposed, in which a field-based motion detection signal is generated and combined with the frame-based motion detection signal. The field-based motion detection signal is generated by comparing the difference in luminance between successive pairs of pixels at corresponding positions from fields t (a current field) and t-1 (the immediately preceding field). The frame-based motion detection signal and the field-based motion detection signal are then combined and compared with a predetermined threshold to generate the composite detection signal. The illustrative embodiment of the invention will be described by way of example with reference to this improved motion detector, although it will be appreciated that other motion detectors may be arranged in accordance with embodiments of the invention.
Figure 2 schematically illustrates a video camera apparatus comprising a camera body 200 and a lens 210. Inside the camera body are a CCD image pick up device 220, signal processing circuitry 230 for reading a video signal from the CCD image pick up device 220, and a lens aberration correction processor 240 for correcting aberrations caused by the particular lens in use.
The lens aberration correction processor 240 will be described in much greater detail below, but in summary the circuitry receives an aberration signal 250 defining distortions introduced by the lens 210 and applies corresponding corrections to the images from the image pick up device.
The aberration signal 250 from the lens 210 in fact comprises two lens output voltages: Vabr, representing lateral chromatic aberration introduced by that lens, and Vdis, representing the distortion introduced by that lens. The two lens output voltages are generated by circuitry within the lens in dependence on a current zoom, iris and/or focus setting, according to measurement data produced by the lens manufacturer indicating the expected lens aberrations at that particular zoom, iris and focus setting. Techniques for the generation and output of these lens aberration signals are well established in the art.
The two lens output voltages Vabr and Vdis are defined by the following equations:
Vabr = C(mm) x 72.7 (mV) + 2.5 (V)

where C is the aberration at 4.4 mm distance from the image centre, mV represents a quantity in millivolts and V represents a quantity in volts; and:
Vdis = 60 x P + 2.5 (V)

where

d(mm) = (1.252 x Y - Y³) x P x 4.4

Y(mm) = (distance from the image centre)/4.4 (mm)

and d(mm) is the positional error (deflection) at Y mm from the image centre.
So, the lens aberration correction circuitry determines spatial distortions applied by the lens 210 to the image picked up by the image pickup device 220 and, using horizontal and vertical interpolation techniques, applies corrections to at least reduce these distortions, if not to restore the image to its undistorted form.
The video signal as corrected by the lens aberration correction processor 240 is output for further signal handling, such as recording, transmission, mixing etc. It is also routed to a monitor screen 260 within the camera body 200, so that the currently picked up image can be viewed via a viewfinder 270.
Figure 3 schematically illustrates a video processing apparatus embodying the lens aberration correction processor 240. The processor of figure 3 receives a digital video signal from the signal processor 230 and the aberration signal from the lens 210, and generates an output video signal 280 to form the output from the camera apparatus.
The lens aberration correction processor comprises an error correction module 300, a horizontal interpolator 310, a vertical field-based interpolator 320, a vertical frame-based interpolator 330, frame and field delays 340, a motion detector 350 and an output mixer 360.
As mentioned above, the lens aberration signal comprises two lens output voltages Vabr and Vdis. In order to make use of these in the circuitry for correcting lens introduced distortions, these output voltages have to be converted into pixel corrections to be applied at various positions in each field of the video signal.
Equations defining this technique will now be described. The equations make use of a number of variables to be listed below, and for each variable the value applicable to the present embodiment is given in brackets:
SIZEX - number of pixels of the CCD (1920)
SIZEY - number of frame lines of the CCD (1080)
CCDX - horizontal size in mm of the CCD (9.587)
CCDY - vertical size in mm of the CCD (5.382)
XPOS - horizontal position in pixels from the centre of the CCD
YPOS - vertical position in frame lines from the centre of the CCD

Define x, the distance from the centre of the CCD in mm for XPOS pixels:

x(mm) = (CCDX / SIZEX) x XPOS

Define y, the distance from the centre of the CCD in mm for YPOS frame lines:

y(mm) = (CCDY / SIZEY) x YPOS

In the error correction module 300, the lens output voltage Vdis is supplied to a distortion corrector 302 and the lens output Vabr is supplied to a chromatic aberration corrector 304. These each generate values for the distortion (as a number of pixels of image displacement) in the horizontal (x) and vertical (y) image directions for each position within the image. The distortion signals are added together by an adder 306 to form an error signal 308 supplied to the horizontal interpolator 310, the vertical field-based interpolator 320 and the vertical frame-based interpolator 330.
The distortion corrector 302 calculates the following equations using the variable P described earlier:
Xdistortion(pixels) = 4.4 x P x (x/4.4) x (1.252 - (x/4.4)² - (y/4.4)²) / (CCDX / SIZEX)

Ydistortion(pixels) = 4.4 x P x (y/4.4) x (1.252 - (x/4.4)² - (y/4.4)²) / (CCDY / SIZEY)

The chromatic aberration corrector 304 carries out the following calculations using the variable C defined earlier:
Xchromatic(pixels) = C x XPOS / 4.4

Ychromatic(pixels) = C x YPOS / 4.4

So, the error signal 308 comprises an X component and a Y component giving the horizontal and vertical distortion represented by the lens output voltages Vdis and Vabr at each position in the image, measured in terms of pixels in the horizontal direction and lines in the vertical direction.
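The following Python sketch gathers the above relationships into a single routine; the inversion of the lens voltages back to C and P, and the exact form of the distortion expression, follow the reconstruction given above and should be treated as assumptions rather than the definitive formulas.

SIZEX, SIZEY = 1920, 1080      # CCD pixels and frame lines
CCDX, CCDY = 9.587, 5.382      # CCD size in mm

def error_signal(xpos, ypos, v_abr, v_dis):
    # Return the (X, Y) components of error signal 308, in pixels and lines,
    # for a point at (xpos, ypos) measured from the centre of the CCD.
    c = (v_abr - 2.5) / 0.0727             # invert Vabr = C x 72.7 mV + 2.5 V (assumed inversion)
    p = (v_dis - 2.5) / 60.0               # invert Vdis = 60 x P + 2.5 V (assumed inversion)

    x_mm = CCDX * xpos / SIZEX             # x, distance from the centre in mm
    y_mm = CCDY * ypos / SIZEY

    # Distortion corrector 302 (from d(mm) = (1.252 x Y - Y^3) x P x 4.4, resolved
    # into horizontal and vertical components and converted to pixels/lines).
    radial = 1.252 - (x_mm / 4.4) ** 2 - (y_mm / 4.4) ** 2
    x_dist = 4.4 * p * (x_mm / 4.4) * radial / (CCDX / SIZEX)
    y_dist = 4.4 * p * (y_mm / 4.4) * radial / (CCDY / SIZEY)

    # Chromatic aberration corrector 304.
    x_chrom = c * xpos / 4.4
    y_chrom = c * ypos / 4.4

    # Adder 306 forms error signal 308.
    return x_dist + x_chrom, y_dist + y_chrom

print(error_signal(400, 200, v_abr=2.51, v_dis=2.8))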
Although the example embodiment of the present invention is arranged to generate the error signal 308 from the lens output voltages Vdis and Vabr, in other embodiments the error signal 308 may be derived from the video signal itself by analysing the colour components of the image represented by the video signal.
The horizontal interpolator 310 comprises a pixel delay and integer pixel shift circuit 312 and an 11 tap 64 sub-position interpolation filter 314, although larger filters could be used. These operate in a standard fashion to interpolate each pixel along each line of the image from other pixels disposed along the same line, but applying any necessary horizontal shift to correct the horizontal component of the image distortion specified by the error signal 308.
So, for example, if the error signal 308 specifies that at a current image position, the lens has introduced a horizontal shift to the left of, say, 3.125 pixels, the horizontal interpolator 310 will interpolate a current output pixel based on an interpolation position 3.125 pixels to the right of the current output pixel's position.
Horizontal interpolation used in this manner for scaling images is well established in the art.
The particular interpolation filter used in this example is an 11 tap 64 sub position filter, which means that horizontal pixel shifts can be defined to sub-pixel accuracy, and in particular to an accuracy of one sixty-fourth of the separation between two adjacent pixels. Again, this technique is very well established in the art.
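By way of illustration only, the hypothetical helper below shows how such a fractional shift can be split into an integer pixel shift for the shift circuit 312 and one of the 64 sub-positions for the interpolation filter 314 (the rounding convention is an assumption):

import math

def split_shift(shift_pixels, sub_positions=64):
    # Split a fractional horizontal shift into a whole-pixel part and a
    # sub-position index in units of 1/64 of the pixel spacing.
    integer = math.floor(shift_pixels)
    sub = round((shift_pixels - integer) * sub_positions)
    if sub == sub_positions:          # rounded up to the next whole pixel
        integer += 1
        sub = 0
    return integer, sub

print(split_shift(3.125))             # (3, 8): three whole pixels plus 8/64 of a pixel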
Accordingly, the horizontal interpolator 310 outputs lines of pixels in which the horizontal component of distortion defined by the lens output voltages Vdis and Vabr has been corrected. These lines of pixels are supplied in parallel to the vertical field-based interpolator 320 and the vertical frame-based interpolator 330.
The vertical interpolators 320, 330 act to correct the vertical component of the distortions specified by the lens output voltages Vdis and Vabr in a roughly similar manner to the horizontal correction applied by the horizontal interpolator 310. So, if the distortion specified by the lens output voltages at a particular position in the image is a vertical shift of, say, 7.375 lines upwards, the vertical interpolators will generate pixels at that image position by vertical interpolation about a centre position in the input field which is 7.375 lines below the position corresponding to the current output pixel.
Each of the vertical field-based interpolator and the vertical frame-based interpolator comprises a series of first-in-first-out buffers (FIFOs) 322, 332 providing line delays for the interpolation process, a multiplexer 324, 334 providing integer field line shifts and an 11 tap 32 sub-position interpolation filter 326, 336 to provide sub-line accuracy in the interpolation process. In addition the vertical frame-based interpolator has a field delay 338 to provide two adjacent fields for use in the interpolation process.
The vertical field-based interpolator 320 and the vertical frame-based interpolator 330 operate in parallel to produce respective versions of each output pixel.
The selection of which version to use in forming the output video signal 280 is made by the output mixer 360 under the control of the motion detector 350.
The reason for this choice of the two versions of each output pixel is as follows. The vertical frame-based interpolator uses pixels from two adjacent fields to generate output pixels by interpolation. Because the two adjacent fields are interlaced, this gives much better spatial resolution and avoids alias problems when compared to vertical field interpolation, where pixels from only one field are used. However, if there is motion present in the part of the image currently being interpolated, the use of two temporally separated fields (in the frame-based interpolator) will lead to an inferior output because the image will have moved between the two fields.
Accordingly, if motion is detected by the motion detector 350 in a subarea surrounding a current output pixel, that output pixel is selected to be the version produced by the field-based interpolator 320. Conversely, if motion is not detected in the sub-areas surrounding the current output pixel, that pixel is selected to be the version generated by the vertical frame-based interpolator 330. This selection is made by the output mixer 360 under the control of a motion signal 370 supplied by the motion detector 350.
The output mixer can work in another way, by operating a "soft threshold".
For example, if the threshold is x and a "degree of softness" is y, then for any degree of motion generated by the motion detector which is less than (x-y), the pixel from the frame-based interpolator will form the output pixel. If the degree of motion is greater than (x+y) then the pixel from the field-based interpolator will be used. Between these two levels, the output mixer combines the two possible output pixels in a normalised additive combination so that in the range ((x-y) < degree of motion < (x+y)), the proportion of the frame-based pixel varies linearly with respect to the degree of motion.
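A minimal sketch of this soft-threshold mix (Python; the threshold x, the softness y and the pixel values are placeholders, not values taken from the patent):

def mix_output_pixel(frame_pixel, field_pixel, degree_of_motion, x=16.0, y=4.0):
    # Output mixer 360 with a soft threshold: frame-based pixel below (x - y),
    # field-based pixel above (x + y), and a normalised linear blend in between.
    if degree_of_motion <= x - y:
        return frame_pixel
    if degree_of_motion >= x + y:
        return field_pixel
    t = (degree_of_motion - (x - y)) / (2.0 * y)      # 0..1 across the soft region
    return (1.0 - t) * frame_pixel + t * field_pixel

print(mix_output_pixel(100.0, 120.0, degree_of_motion=16.0))   # 110.0, an equal blend at the threshold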
Figure 4 schematically illustrates the motion detector 350 in greater detail.
The motion detector 350 receives pixels of three temporally adjacent fields from the frame/field delays 340, namely a current field at a time t, the temporally preceding field t-1 and the field preceding that, t-2. The motion detector comprises a frame-based motion detector 400, a field-based motion detector 410, a non-additive mixer (NAM) 420 and a threshold comparator 430.
In preferred embodiments, the motion detector 350 operates to detect motion by two techniques in parallel. However, the motion detector 350 may operate with a frame-based motion detector 400 only, or a field-based motion detector only, or, as will be described in the illustrative example embodiment, a combination of frame-based and field-based motion detection. The frame-based motion detector 400 detects motion between corresponding blocks of pixels in two fields of the same polarity, namely fields t and t-2. The field-based motion detector 410 detects differences between corresponding blocks of pixels in two adjacent fields of opposite polarities, namely fields t and t-1. After a scaling process (see below) the image differences detected by the two motion detectors 400, 410 are combined by the NAM 420 before being compared to a threshold image difference 440 by the comparator 430.
If the comparator 430 detects that the output of the NAM 420 is greater than the threshold 440, an output motion signal 450 is set to a state indicating that motion is present in the block surrounding the current output pixel. If the output of the NAM 420 is less than the threshold 440, the motion signal 450 is set to a state indicating that motion is not present in the block of pixels surrounding the current output pixel.
If the output of the NAM 420 is equal to the threshold 440, the setting applied to the motion signal 450 can be selected by convention. In the present embodiment, if the NAM output is equal to the threshold 440, it is considered by convention that motion is indeed present.
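A sketch of this combination and comparison (Python; the scaling coefficient a and the numerical threshold are user or application settings, so the defaults here are placeholders):

def motion_signal(frame_avg_diff, field_avg_diff, a=0.5, threshold=10.0):
    # NAM 420 takes the greater of the frame-based measure and the scaled
    # field-based measure; comparator 430 then compares it with threshold 440.
    # Equality counts as motion, by the convention described above.
    nam = max(frame_avg_diff, a * field_avg_diff)
    return nam >= threshold

print(motion_signal(4.0, 30.0))       # True: the scaled field-based difference (15.0) exceeds the threshold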
Considering first the frame-based motion detector, pixels of the two input fields t and t-2 are supplied to a difference detection processor 401 and in parallel to a frequency component analyser 402. The frequency component analyser is shown to have two horizontal high pass filters 403, 403'. Each high pass filter is fed with the pixels representative of the image within each of the two input fields t and t-2. The high pass filters 403, 403' are arranged to high pass filter the horizontal frequencies of the two input fields. The output of each filter is fed to a NAM 404 which is arranged to generate a control signal at an output 405 indicative of a level of high frequency horizontal components of the image, which represents an amount of horizontal detail in the block of the two input fields being processed.
The control signal 405 is formed so that as the high frequency component (a representation of image detail) of one or both of the input fields increases, the image difference as detected by the difference detection processor 401 can be controlled to the effect that the 'dead-zone' introduced by the difference detection processor 401 can be reduced when there is significant energy in the high frequencies of the horizontal components in either or both of the two input fields. If there is a significant amount of detail in the horizontal direction within the image, which is indicated by significant horizontal high frequency components, then frame-based interpolation should not be used. This is because double imaging which can be caused by frame-based interpolation will be more likely to be noticed around this horizontal detail. The control signal 405 is fed to the difference detection processor 401, which uses the signal to generate a difference detection signal as will be described shortly.
The output of the difference detection processor 401 is supplied to a low pass filter 406 which filters the difference signal to smooth out sudden changes in the difference detection. The absolute difference values are then averaged over a block of 5 x 3 pixels by an averaging circuit 408, with the average being supplied to the NAM 420.
Turning to the field-based difference detector 410, the two fields are supplied in parallel to vertical low pass filters 411 which are arranged to remove high frequency components from the fields to avoid alias problems and to provide a fractional line shift to each of the two (opposite polarity) fields so that a proper comparison can be made by a difference detector 412. The filtered pixels are then supplied to the difference detector 412, a low pass filter 413 and an absolute value detector 414 as described above. The output of the absolute value detector 414 is multiplied by a scaling coefficient a before the scaled result is passed to an averaging circuit 415. The averaged output is supplied to the NAM 420.
The coefficient a can be adjusted by the user by means of an adjustment control on the camera body (not shown), so that the user can vary the relative response of the frame-based and field-based motion detectors.
The NAM 420 combines its two inputs so that the output of the NAM represents the greater of the two inputs.
The difference detection processor 401 according to the illustrative example embodiment of the present invention, which is shown in figure 4, is shown in greater detail in figure 5. In figure 5 two input conductors 501, 502 receive the two input fields of the same polarity for a current image t and a previous image t-2, as correspondingly represented in figure 4. Each of the fields received on the input channels 501, 502 is supplied via connecting channels 503 respectively to first and second inputs of each of eight displacement generators 504. Each of the displacement generators 504 operates to introduce a displacement offset in a horizontal direction by a predetermined amount. Furthermore the displacement generators 504 are paired, each pair introducing a horizontal offset between the two input fields by the same predetermined number of pixels but opposite in polarity. Thus, for example, the first and last displacement generators each introduce a horizontal displacement between the two input fields of +2 pixels in the horizontal direction and -2 pixels in the horizontal direction, which is 2 pixels in the reverse horizontal direction. Each of the displacement generators 504 provides at a first output the first input field, and at a second output the second input field shifted with respect to the first input field by the amount set for that displacement generator. The result for each shifted combination is presented on the first and the second output channels from the displacement generators 504, and these are received on corresponding input channels respectively by one of a plurality of absolute difference calculators 506. Furthermore the two input fields are also fed, without passing through a displacement generator, to a central absolute difference calculator 508. The absolute difference calculators 506, 508 generate an absolute value representative of the image difference in luminance signal values for the pixel or pixels within the test part of the image being compared in the first input field and the corresponding part of the second input field, but correspondingly shifted by the displacement generators 504.
The absolute difference calculators feed this absolute difference to a selection processor 512. However the first two and last two absolute difference calculators 506 feed the absolute difference result to the first input respectively of two combining circuits 514, 516. A second input of the combining circuits 514, 516 is fed with the absolute difference result from the central absolute difference calculator 508. The combining circuits 514, 516 operate to combine the absolute image difference signals received on the first and second inputs in accordance with a predetermined proportion to form a composite image difference signal. For example the first and last combining circuits 514 serve to combine 30% of the absolute difference result from respectively the +2 pixel shift and the -2 pixel shift with a 70% combination from the zero displacement difference signal from the central absolute difference calculator 508. The next two outer combining circuits 516 combine a 60% proportion of the absolute difference of the +1.5 pixel shift and the -1.5 pixel shift with a 40% proportion of the absolute difference result of the zero displacement image shift signal. Each of the combining circuits 514, 516 produces a composite image difference signal which is fed to the selection processor 512. The selection processor 512 operates to select a minimum of each of the absolute image difference signals from the absolute difference calculators 506 and the composite image difference signals from the combining circuits 514, 516 and presents this minimum value at an output 520. It is this output 520 which is further fed to the low pass filter 406 of the frame-based motion detector 400 shown in figure 4.
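The sketch below (Python/NumPy, illustrative only) puts the elements of figure 5 together for a single test block: the +/-2 and +/-1.5 pixel shifts and the 30%/70% and 60%/40% proportions follow the example above, while the +/-1 and +/-0.5 pixel values assumed for the four inner displacement generators, and the use of linear interpolation for the fractional shifts, are assumptions.

import numpy as np

def shifted(rows, d):
    # Displacement generator 504: sample the field rows with a horizontal
    # shift of d pixels (linear interpolation for fractional shifts).
    w = rows.shape[1]
    xs = np.arange(w) + d
    return np.vstack([np.interp(xs, np.arange(w), row) for row in rows.astype(float)])

def block_abs_diff(a, b):
    # Absolute difference calculators 506/508: mean absolute luminance difference.
    return float(np.mean(np.abs(np.asarray(a, float) - np.asarray(b, float))))

def difference_detection_processor(block_t, block_t2):
    d0 = block_abs_diff(block_t, block_t2)                 # central calculator 508, zero displacement
    candidates = [d0]
    for d in (-1.0, -0.5, 0.5, 1.0):                       # inner generators, fed straight to the selector
        candidates.append(block_abs_diff(block_t, shifted(block_t2, d)))
    for d in (-1.5, 1.5):                                  # combining circuits 516: 60% shifted + 40% central
        candidates.append(0.6 * block_abs_diff(block_t, shifted(block_t2, d)) + 0.4 * d0)
    for d in (-2.0, 2.0):                                  # combining circuits 514: 30% shifted + 70% central
        candidates.append(0.3 * block_abs_diff(block_t, shifted(block_t2, d)) + 0.7 * d0)
    return min(candidates)                                 # selection processor 512 -> output 520

block_t  = np.array([[10, 10, 60, 60, 10, 10]], dtype=float)
block_t2 = np.array([[10, 60, 60, 10, 10, 10]], dtype=float)   # same detail one pixel to the left at t-2
print(difference_detection_processor(block_t, block_t2))       # 0.0: the -1 pixel comparison matches exactly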
The difference detection processor 401 is arranged to introduce a dead-zone into the motion detection signal produced at the output 520. This dead-zone is correspondingly introduced into the output of the frame-based motion detector 400.
This is because, if an image in part of the field being compared moves by an amount corresponding to the shift in pixels introduced by the shift generators 504, the resulting motion detection signal produces the same result as would occur if there were no motion of the image within that part of the two input fields. As a result a dead-zone is produced in which, although an image in the part of the field being compared may in fact be moving, the difference detection processor 401 will generate a result indicating that no motion is present. This is represented diagrammatically in figure 6.
Figure 6 provides a plot of vertical motion V with respect to horizontal motion H. A broken line 550 marks the motion of part of an image within the two input fields which would appear not to move. This is the dead-zone produced by the frame-based motion detector resulting from the arrangement of the difference detection processor 401. The broken line 550 is representative of a region in which movement of an image in a horizontal direction H or a vertical direction V will not generate a signal representative of motion. Correspondingly, by comparison a second broken line 552 is representative of a boundary of a smaller region in which the motion detector 400 would produce a signal indicating no motion in the images represented by the video signal.
As can be appreciated, therefore, by selecting the predetermined shifts introduced by the shift generators 504, the dead-zone of the frame-based motion detector can be expanded and shaped accordingly. As will be appreciated, however, a difference detection processor corresponding to that shown in figure 5 could form the difference detector 412 of the field-based motion detector 410, and as a result the dead-zone in the field-based motion detector would be correspondingly shaped and expanded.
As already mentioned, in a preferred embodiment the dead-zone is adjusted in accordance with an amount of horizontal detail within the two input fields. To this end, the control signal 405 is received at an input 554 to the selection processor 512. The selection processor 512 receives the control signal 405 and scales the difference signals generated at the outputs of the absolute difference calculators 506 and the combining circuits 514, 516 by an amount in proportion to the level of the control signal 405. The output from the central difference calculator 508, however, is not scaled by the control signal 405. In effect, because the selection processor 512 is selecting the minimum of its inputs, scaling the difference signals in proportion with the control signal 405 has the effect of shrinking the 'dead-zone'. As such, if the control signal 405 is low, indicating a low level of horizontal detail in the two input fields, then the dead-zone is as shown by the boundary 550 in figure 6. However, if the control signal 405 is high, indicating significant horizontal detail in at least one of the input fields, then the minimum absolute difference signal becomes that produced by the central difference calculator 508. As such, the dead-zone is now reduced to that represented by the boundary line 552 in figure 6. In this way frame-based interpolation will be performed if there is a small amount of horizontal movement, provided there is no significant horizontal detail in the image.
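Continuing the sketch above, the control signal could be applied by scaling every candidate except the central zero-displacement difference before the minimum is selected; the text does not give an exact scaling law, so the linear factor below is an assumption.

def select_with_detail_control(central_diff, displaced_diffs, control_405, gain=1.0):
    # Selection processor 512 with control input 554: scale the displaced and
    # composite difference signals in proportion to the horizontal-detail level,
    # leave the central difference unscaled, and take the minimum.  With high
    # detail the central difference wins, shrinking the dead-zone towards 552.
    scale = 1.0 + gain * control_405
    return min([central_diff] + [scale * d for d in displaced_diffs])

print(select_with_detail_control(12.0, [0.5, 3.0, 9.0], control_405=0.0))    # 0.5: full dead-zone
print(select_with_detail_control(12.0, [0.5, 3.0, 9.0], control_405=50.0))   # 12.0: dead-zone suppressed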
In summary, an aspect of the invention is a motion detector as illustrated and explained for the example embodiment, which operates in accordance with the process steps of
(i) generating a first image difference signal representative of a difference between a test area of a current image of said video signal and a corresponding area of another image of the video signal,
(ii) generating a second image difference signal representative of a difference between the test area of the current image and the corresponding area of the other image displaced by a predetermined amount in a predetermined direction, and
(iii) generating a motion detection signal from the first and the second image difference signals.
Accordingly the step (iii) may comprise the step of selecting a minimum absolute value of the first and the second image difference signals as a representation of the motion detection signal. The step (ii) may comprise the step of generating a plurality of image difference signals, each of which is representative of a difference between the test area of the current image and the corresponding area of the other image displaced by a different predetermined amount, and the step (iii) may comprise the step of selecting a minimum absolute value of the first and the plurality of difference signals.
As will be appreciated, although the frame-based motion detector 400 includes the frequency component analyser 402 in order to ameliorate the effect of high horizontal frequency components in the two input fields, it will be understood that the difference detection processor 401 according to the example embodiment of the present invention and the frame-based motion detector 400 could operate without this arrangement. Furthermore the motion detector 350 could be formed with the frame-based motion detector 400 alone.
As will be appreciated by those skilled in the art, various modifications may be made to the example embodiment described without departing from the scope of the present invention. In particular, the difference detection processor 401 as described is arranged to form and shape a dead-zone as illustrated with reference to figure 6.
However it will be understood that this is a non-limiting example embodiment, in that the difference detection processor forming part of the motion detector could be arranged to produce an output indicative of non-motion for any degree of motion between the parts of the images under test in any predetermined direction. Furthermore, it will be appreciated that the motion detector finds application in detecting motion in video signals and may be applied to video processing systems of other applications.

Claims (34)

1. A video motion detector arranged in operation to generate a motion signal representative of motion between images represented by a video signal, said video motion detector comprising - a difference detection processor which is arranged in operation to generate - a first image difference signal representative of a difference between a test area of a current image of said video signal and a corresponding area of another image of said video signal, and - a second image difference signal representative of a difference between said test area of said current image and said corresponding area of said another image displaced by a predetermined amount in a predetermined direction, and - a selection processor coupled to said difference detection processor which is arranged in operation to generate said motion signal from said first and said second image difference signals.
2. A video motion detector as claimed in Claim 1, wherein said motion signal is generated by selecting a minimum absolute value of said first and said second image difference signals.
3. A video motion detector as claimed in Claim 2, wherein said second image difference signal is one of a plurality of image difference signals, each of which is representative of a difference between said test area of said current image and said corresponding area of said another image displaced by a different predetermined amount, said selection processor being arranged in operation to produce said motion signal by selecting a minimum absolute value of said first and said plurality of difference signals.
4. A video motion detector as claimed in Claim 3, wherein each of said plurality of image difference signals is paired with another of said plurality of image difference signals, each of said pair of image difference signals being displaced by the same predetermined amount but opposite in direction.
5. A video motion detector as claimed in Claims 3 or 4, wherein at least one of said image difference signals is representative of a predetermined displacement corresponding to a fraction of a pixel.
6. A video motion detector as claimed in any of Claims 3, 4 or 5, wherein at least two of said plurality of image difference signals are combined to form at least one composite signal, and said selection processor is arranged in operation to produce said motion signal by selecting the minimum absolute value of at least one of said image difference signals and said at least one composite signal.
7. A video motion detector as claimed in Claim 6, wherein said at least one composite signal is formed by combining a predetermined fraction of one of said at least two image difference signals and a predetermined fraction of another of said predetermined difference signals.
8. A video motion detector as claimed in any preceding Claim, wherein said selection processor compares said selected minimum absolute value of said image difference signals with a threshold, said motion signal being representative of a result of said comparison.
9. A video motion detector as claimed in any preceding Claim, wherein said predetermined displacement is a displacement in a predetermined direction with respect to the orientation of said current and said another image.
10. A video motion detector as claimed in Claim 9, comprising a frequency component analyser arranged to receive said test area of said current image of said video signal and said corresponding area of said another image of said video signal, and to generate a control signal indicative of an amount of high frequency component in said predetermined direction of displacement of said components of said current and said another image from which said second image difference signal is formed, wherein said selection processor is coupled to said frequency component analyser and arranged in operation to adapt said second image difference signal in accordance with said control signal.
11. A video motion detector as claimed in any preceding Claim, wherein said orientation of said displacement is horizontal, with respect to said current and said another image.
12. A video motion detector as claimed in any preceding Claim, wherein said current image corresponds to a current field of said video signal and said another image corresponds to another field of said video signal temporally separated by one frame, said difference detector being a frame based difference detector and said motion signal generated by said selection processor being a frame based motion signal.
13. A video motion detector as claimed in Claim 12, comprising - a field based difference detector arranged in operation to receive said video signal and to generate from said video signal a field based motion signal, and - a combiner means coupled to said selection processor and said field based difference detector and operable to form a composite motion signal from said field based motion signal and said frame based motion signal.
14. Video signal processing apparatus for processing images of an input video signal, the apparatus comprising a motion detector according to any one of the preceding claims, the motion detector being arranged to detect whether motion is present in each of a plurality of image areas in images of the video signal, and a video signal processor for processing images of the video signal, the video signal processor having at least a first and a second mode of operation, the first or the second mode of operation being selected for processing each image area in response to a detection by the motion detector of a degree of image motion present in that image area.
15. Video signal processing apparatus as claimed in Claim 14, wherein said video signal processor is arranged in said first mode to process said each image area in accordance with a frame based interpolation, and in said second mode to process said each image area in accordance with field based interpolation.
16. Video camera apparatus comprising - an image pickup device for generating a video signal, - a lens arrangement for focusing light onto the image pickup device, the lens being operable to generate a lens aberration signal in response at least to a current focus and/or zoom setting, and a video signal processor according to claims 14 or 15, the video signal processor being operable to process the video signal in accordance with the lens aberration signal.
17. A method of video motion detection comprising the steps of generating a first image difference signal representative of a difference between a test area of a current image of said video signal and a corresponding area of another image of said video signal, - generating a second image difference signal representative of a difference between said test area of said current image and said corresponding area of said another image displaced by a predetermined amount in a predetermined direction, and - generating a motion detection signal from said first and said second image difference signals.
18. A method of video motion detection as claimed in Claim 17, wherein the step of generating said motion detection signal comprises the step of selecting a minimum absolute value of said first and said second image difference signals as a representation of said motion detection signal.
19. A method of video motion detection as claimed in Claim 18, wherein the step of generating said second image difference signal comprises the step of - generating a plurality of image difference signals, each of which is representative of a difference between said test area of said current image and said corresponding area of said another image displaced by a different predetermined amount, and the step of selecting a minimum absolute value of said first and said second image difference signals comprises the step of - selecting a minimum absolute value of said first and said plurality of difference signals.
20. A method of video motion detection as claimed in Claim 19, wherein each of said plurality of image difference signals is paired with another of said plurality of image difference signals, each of said pair of image difference signals being displaced by the same predetermined amount but opposite in direction.
21. A method of video motion detection as claimed in any of Claims 19 or 20, wherein at least one of said image difference signals is representative of a predetermined displacement corresponding to a fraction of a pixel.
22. A method of video motion detection as claimed in any of Claims 19, 20 or 21, comprising the step of - combining at least two of said plurality of image difference signals to form at least one composite signal, wherein the step of selecting a minimum absolute value of said first and said plurality of difference signals comprises the step of - selecting the minimum absolute value of at least one of said first and said at least one composite signal.
23. A method of video motion detection as claimed in Claim 22, wherein the step of combining at least two of said plurality of image difference signals to form said at least one composite signal comprises the step of - combining a predetermined fraction of one of said at least two image difference signals and a predetermined fraction of another of said predetermined difference signals.
24. A method of video motion detection as claimed in any of Claims 17 to 23, comprising the step of comparing said selected minimum absolute value of said image difference signals with a threshold, and - generating said motion signal in dependence upon the result of said comparison.
25. A method of video motion detection as claimed in any of Claims 17 to 24, wherein said predetermined displacement is a displacement in a predetermined direction with respect to the orientation of said current and said another image.
26. A method of video motion detection as claimed in Claim 25, comprising the steps of - analysing said test area of said current image of said video signal and said corresponding area of said another image of said video signal, - generating a control signal indicative of an amount of high frequency component in said predetermined direction of displacement between said components of said current and said another image used to form said second image difference signal, and - adapting said second image difference signals in accordance with said control signal.
27. A method of video motion detection as claimed in Claim 25 or 26, wherein said orientation of said displacement is horizontal, with respect to said current and said another image.
28. A method of video motion detection as claimed in any of Claims 17 to 27, wherein said current image corresponds to a current field of said video signal and said another image corresponds to another field of said video signal temporally separated by one frame, said selected minimum absolute value of said image difference signals being representative of a frame based video motion signal.
29. A method of video motion detection as claimed in Claim 28, comprising, - generating from said video signal a field based motion signal, and combining said frame based video motion signal and said field based video motion signal to form a composite motion signal to represent said video motion.
30. A method of processing a video signal, said method comprising the steps of - detecting motion present in each of a plurality of image areas in images of said video signal according to the method claimed in any of claims 17 to 29, - processing said image areas of images of said video signal in accordance with at least a first and a second mode of operation, wherein the first or the second mode of operation is selected for processing each image area in response to the step of detecting motion present in said areas.
31. A method of processing a video signal as claimed in Claim 30, wherein said first mode comprises the step of - processing said each image area in accordance with a frame based interpolation, and said second mode comprises the step of - processing said each image area in accordance with field based interpolation.
32. A computer program providing computer executable instructions, which when loaded onto a computer configures the computer to operate as a video motion detector as claimed in any of Claims 1 to 12.
33. A computer program providing computer executable instructions, which when loaded on to a computer causes the computer to perform the method according to Claims 17 to 31.
34. A computer program product having a computer readable medium recorded thereon information signals representative of the computer program claimed in any of Claims 32 or 33.
35. A video motion detector substantially as herein before described with reference to Figures 4, 5 and 6 of the accompanying drawings.
36. Video camera apparatus substantially as herein before described with reference to Figures 2 to 6 of the accompanying drawings.
37. Video signal processing apparatus substantially as herein before described with reference to Figures 2 to 6 of the accompanying drawings.
38. A method of video motion detection substantially as herein before described with reference to Figures 4, 5 and 6 of the accompanying drawings.
GB0007938A 2000-03-31 2000-03-31 Video motion detection Withdrawn GB2360897A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB0007938A GB2360897A (en) 2000-03-31 2000-03-31 Video motion detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0007938A GB2360897A (en) 2000-03-31 2000-03-31 Video motion detection

Publications (2)

Publication Number Publication Date
GB0007938D0 GB0007938D0 (en) 2000-05-17
GB2360897A true GB2360897A (en) 2001-10-03

Family

ID=9888916

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0007938A Withdrawn GB2360897A (en) 2000-03-31 2000-03-31 Video motion detection

Country Status (1)

Country Link
GB (1) GB2360897A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1520411A1 (en) * 2002-05-21 2005-04-06 Alcon RefractiveHorizons, Inc. Image deinterlacing system for removing motion artifacts and associated methods

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2229603A (en) * 1987-09-25 1990-09-26 British Telecomm Motion estimator
WO1993016556A1 (en) * 1992-02-08 1993-08-19 Samsung Electronics Co., Ltd. Method and apparatus for motion estimation
GB2309135A (en) * 1996-01-11 1997-07-16 Samsung Electronics Co Ltd Estimating image motion by comparing adjacent image frame signals

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2229603A (en) * 1987-09-25 1990-09-26 British Telecomm Motion estimator
WO1993016556A1 (en) * 1992-02-08 1993-08-19 Samsung Electronics Co., Ltd. Method and apparatus for motion estimation
GB2309135A (en) * 1996-01-11 1997-07-16 Samsung Electronics Co Ltd Estimating image motion by comparing adjacent image frame signals

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1520411A1 (en) * 2002-05-21 2005-04-06 Alcon RefractiveHorizons, Inc. Image deinterlacing system for removing motion artifacts and associated methods
EP1520411B1 (en) * 2002-05-21 2008-05-28 Alcon RefractiveHorizons, Inc. Image deinterlacing system for removing motion artifacts and associated methods

Also Published As

Publication number Publication date
GB0007938D0 (en) 2000-05-17

Similar Documents

Publication Publication Date Title
JP3103894B2 (en) Apparatus and method for correcting camera shake of video data
US8355442B2 (en) Method and system for automatically turning off motion compensation when motion vectors are inaccurate
KR20050059407A (en) Method and apparatus for image deinterlacing using neural networks
JP4575431B2 (en) Protection with corrected deinterlacing device
US7688385B2 (en) Video signal processing apparatus and method
EP1460847B1 (en) Image signal processing apparatus and processing method
WO2003055211A1 (en) Image signal processing apparatus and processing method
JPH08205181A (en) Chromatic aberration correcting circuit and image pickup device with chromatic aberration correcting function
EP1599042A2 (en) Image processing device and image processing method
GB2365646A (en) An image processor comprising an interpolator and an adaptable register store
JP2687974B2 (en) Motion vector detection method
GB2360897A (en) Video motion detection
GB2343317A (en) Video motion detection
GB2343316A (en) Video interpolation
US6385250B1 (en) Image processing apparatus and image processing method
JP2624507B2 (en) Motion compensated telecine device
GB2343318A (en) Video motion detection
JP2007288483A (en) Image converting apparatus
JP3121519B2 (en) Motion interpolation method and motion interpolation circuit using motion vector, and motion vector detection method and motion vector detection circuit
JP3696551B2 (en) Method of multiplying picture frequency of image sequence generated by interlace method by 2
US8243196B2 (en) Motion adaptive image processing
JPH05252486A (en) Scanning converter for video signal
JP3013898B2 (en) Motion interpolation method using motion vector in TV signal
JPS62175080A (en) Motion correcting device
JPH06315140A (en) System converter for video signal

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)