GB2312806A - Motion compensated video signal interpolation - Google Patents

Motion compensated video signal interpolation

Info

Publication number
GB2312806A
GB2312806A
Authority
GB
United Kingdom
Prior art keywords
motion
pixel
output image
input images
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB9715329A
Other versions
GB2312806B (en)
GB9715329D0 (en)
Inventor
Morgan William Amos David
Simon Matthew Manze
Martin Rex Dorricott
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Europe Ltd
Original Assignee
Sony United Kingdom Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony United Kingdom Ltd filed Critical Sony United Kingdom Ltd
Priority to GB9715329A priority Critical patent/GB2312806B/en
Publication of GB9715329D0 publication Critical patent/GB9715329D0/en
Publication of GB2312806A publication Critical patent/GB2312806A/en
Application granted granted Critical
Publication of GB2312806B publication Critical patent/GB2312806B/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/01Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0135Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes
    • H04N7/014Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes involving the use of motion vectors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/223Analysis of motion using block-matching
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/144Movement detection
    • H04N5/145Movement estimation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/01Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0117Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving conversion of the spatial resolution of the incoming video signal
    • H04N7/012Conversion between an interlaced and a progressive signal

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Television Systems (AREA)
  • Color Television Systems (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

In a motion compensated video signal processing system (Figure 1) a pair of pixel interpolators receive both the selected motion vector and a flag indicating motion. One interpolator is employed when there is little or no motion and weights the blocks of video data used for interpolation equally, by a factor of 0.5 (760-790). Furthermore, the input video data is intra-field filtered by static coefficients (720) to interpolate additional pixels (610, 620, Figure 7) prior to inter-field interpolation. When the motion flag indicates motion, the video data from the two fields is weighted by factors t and (1 - t) according to the temporal position of the output field, and the intra-field weighting uses a different set of "motion" coefficients.

Description

MOTION COMPENSATED VIDEO SIGNAL PROCESSING

This invention relates to motion compensated video signal processing.
Motion compensated video signal processing is used in applications such as television standards conversion, film standards conversion and conversion between video and film standards.
In a motion compensated television standards converter, such as the converter described in the British Published Patent Application number GB-A-2 231 749, pairs of successive input images are processed to generate sets of motion vectors representing image motion between the pair of input images. The processing is carried out on discrete blocks of the images, so that each motion vector represents the inter-image motion of the contents of a respective block.
Each set of motion vectors is then supplied to a motion vector reducer which derives a subset of the set of motion vectors for each block. The subset is then passed to a motion vector selector which assigns one of the subset of motion vectors to each picture element (pixel) in each block of the image. The selected motion vector for each pixel is supplied to a motion compensated interpolator which interpolates output images from the input images, taking into account the motion between the input images.
This invention provides a motion compensated video signal processing apparatus in which motion vectors are generated to represent image motion between a pair of input images from which an output image is to be derived by motion compensated interpolation, the apparatus comprising: means for detecting correlation between test blocks of the input images pointed to by a zero motion vector indicative of substantially zero image motion; and means for detecting whether the correlation between the test blocks pointed to by the zero motion vector exceeds a predetermined correlation threshold, thereby detecting whether the pixel of the output image is part of a moving or a stationary portion of the output image.
The invention provides a means of detecting whether each pixel of the output image represents a moving or a substantially stationary part of the output image. This information can be used in, for example, adaptation of the interpolation method used to generate that pixel, or in later down conversion of the output image.
In order that the detection of whether the pixel of the output image is part of a moving or stationary portion of the image can be communicated to other parts of the apparatus, it is preferred that the apparatus comprises means for generating a motion flag, associated with each pixel of the output image, indicating whether that pixel is detected to be part of a moving or a stationary portion of the output image.
In order to reduce the complexity of the apparatus, the detection of motion of the output pixel is preferably performed as part of a motion vector selection process.
To this end, it is preferred that the apparatus comprises means for generating a plurality of motion vectors for each pixel of the output image, the plurality of motion vectors including the zero motion vector; means for testing the motion vectors, to select a motion vector for use in interpolation of a pixel of the output image, comprising means for detecting correlation between test blocks of the input images pointed to by a motion vector under test; and means for selecting, from the plurality of motion vectors, that motion vector having the highest correlation between the test blocks pointed to by that motion vector.
In a preferred embodiment the apparatus comprises a motion compensated interpolator for generating each pixel of the output image from a respective pair of input images according to the motion vector selected for use in interpolation of that pixel.
The detection of motion of the output pixel can be used to alter the operation of the interpolator. In particular, it is preferred that the motion compensated interpolator comprises: two pixel interpolators; and means for selecting one of the two pixel interpolators, for interpolation of a pixel of the output image, in dependence on whether that pixel is detected to be part of a moving or a stationary portion of the output image.
If, for example, the input images comprise successive fields of an interlaced video signal, vertical information is lost if the image moves vertically by an odd number of video lines between successive fields. This situation could be detected by detecting an odd vertical component in the motion vector selected for interpolation of an output pixel; however, this would lead to an artificial (and subjectively disturbing) distinction between vertical image motion of an odd and an even number of lines. It is therefore preferred that the distinction is made between moving and non-moving pixels, using the detection system defined above. In this case, for use when image motion is not detected, it is preferred that one of the pixel interpolators comprises: means for interleaving pixels from the two input images, to generate an interleaved block of pixel values; and means for generating a pixel of the output image by intra-block interpolation of the block of interleaved pixel values. Similarly, for use when image motion is detected, it is preferred that one of the pixel interpolators comprises: means for generating two respective intermediate pixel values by intra-image interpolation of each of the pair of input images; and means for combining the two intermediate pixel values to generate a pixel of the output image.
Preferably the means for combining is operable to combine the two intermediate pixel values according to a combining ratio dependent upon the temporal position of the output image with respect to the pair of input images. In a preferred embodiment, a linear relationship is used so that the combining ratio is proportional to the temporal position of the output image with respect to the pair of input images.
It is preferred that the apparatus operates on pairs of input images comprising successive fields of an interlaced input video signal, and that the output image comprises a field of an interlaced output video signal. The use of interlaced input images avoids the need for progressive scan conversion of an input video signal, thus greatly reducing the processing required.
Viewed from a second aspect this invention provides a motion compensated video signal processing apparatus in which an output image is derived by motion compensated interpolation in response to a detection of image motion between a pair of input images, the apparatus comprising: a plurality of motion compensated pixel interpolators; and means for selecting one of the plurality of pixel interpolators, for interpolation of a pixel of the output image from the pair of input images, in response to the detection of image motion between the pair of input images.
Apparatus according to the invention is particularly advantageously employed in a television standards conversion apparatus.
Viewed from a third aspect this invention provides a method of motion compensated video signal processing, in which motion vectors are generated to represent image motion between a pair of input images from which an output image is to be derived by motion compensated interpolation, the method comprising the steps of: detecting correlation between test blocks of the input images pointed to by a zero motion vector indicative of substantially zero image motion; and detecting whether the correlation between the test blocks pointed to by the zero motion vector exceeds a predetermined correlation threshold, thereby detecting whether the pixel of the output image is part of a moving or a stationary portion of the output image.
Viewed from a fourth aspect this invention provides a method of motion compensated video signal processing, in which an output image is derived by motion compensated interpolation in response to a detection of image motion between a pair of input images, the method comprising the step of selecting one of a plurality of pixel interpolators, for interpolation of a pixel of the output image from the pair of input images, in response to the detection of image motion between the pair of input images.
An embodiment of the invention will now be described, by way of example only, with reference to the accompanying drawings, throughout which like parts are referred to by like references, and in which:
Figure 1 is a schematic block diagram of a motion compensated television standards conversion apparatus;
Figure 2 is a schematic diagram illustrating the operation of a motion vector selector;
Figure 3 is a schematic block diagram of a motion vector selector;
Figure 4 is a schematic diagram of a progressively scanned video frame;
Figures 5a and 5b are schematic diagrams of two interlaced video fields;
Figures 6a and 6b are schematic diagrams of two interlaced video fields;
Figure 7 schematically illustrates the operation of a motion compensated interpolator when a motion flag is set;
Figure 8 schematically illustrates the operation of a motion compensated interpolator when a motion flag is not set; and
Figure 9 is a schematic block diagram of a motion compensated interpolator.
Figure 1 is a schematic block diagram of a motion compensated television standards conversion apparatus. The apparatus receives an input interlaced digital video signal 50 (e.g. an 1125/60 2:1 high definition video signal (HDVS)) and generates an output interlaced digital video signal 60 (e.g. a 1250/50 2:1 HDVS).
The input video signal 50 is first supplied to an input buffer/packer 110. In the case of a conventional definition input signal, the input buffer/packer 110 formats the image data into a high definition (16:9 aspect ratio) format, padding with black pixels where necessary. For a HDVS input the input buffer/packer 110 merely provides buffering of the data.
The data are passed from the input buffer/packer 110 to a matrix circuit 120 in which (if necessary) the input video signal's format is converted to the standard "CCIR Recommendation 601" (Y, Cr, Cb) format. From the matrix circuit 120 the input video signal is passed to a time base changer and delay 130, and via a subsampler 170 to a subsampled time base changer and delay 180. The time base changer and delay 130 determines the temporal position of each field of the output video signal, and selects the two fields of the input video signal which are temporally closest to that output field for use in interpolating that output field. For each field of the output video signal, the two input fields selected by the time base changer are appropriately delayed before being supplied to an interpolator 140 in which that output field is interpolated. A control signal t, indicating the temporal position of each output field with respect to the two selected input fields, is supplied from the time base changer and delay 130 to the interpolator 140.
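Purely as an illustration (this is not part of the patent text), the field selection and the derivation of the control signal t might be modelled in software roughly as follows; the function name and the use of a common time base for both signals are assumptions:

    def select_input_fields(output_field_time, input_field_period):
        """Model of the time base changer's field selection: choose the two input
        fields that temporally straddle the required output field, and express the
        output field's position between them as a fraction t (0 <= t < 1) of one
        input field period."""
        index = int(output_field_time // input_field_period)   # temporally earlier input field
        t = (output_field_time / input_field_period) - index   # control signal t
        return index, index + 1, t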
The subsampled time base changer and delay 180 operates in a similar manner, but using spatially subsampled video supplied by the subsampler 170. Pairs of input fields are selected by the subsampled time base changer and delay 180 from the subsampled video, to be used in the generation of motion vectors.
The time base changers 130 and 180 can operate according to synchronisation signals associated with the input video signal, the output video signal, or both. In the case in which only one synchronisation signal is supplied, the timing of fields of the other of the two video signals is generated deterministically within the time base changers 130, 180.
The pairs of fields of the subsampled input video signal selected by the subsampled time base changer and delay 180 are supplied to a motion processor 185 comprising a direct block matcher 190, a data stripper 200, a motion vector estimator 210, a motion vector reducer 220, a motion vector selector 230 and a motion vector post-processor 240. The pairs of input fields are supplied first to the direct block matcher 190 which calculates correlation surfaces representing the spatial correlation between search blocks in the temporally earlier of the two selected input fields and (larger) search areas in the temporally later of the two input fields. Data representing these correlation surfaces are reformatted by the stripper 200 and are passed to the motion vector estimator 210. The motion vector estimator 210 detects points of greatest correlation in the correlation surfaces. (The correlation surfaces actually represent the difference between blocks of the two input fields; this means that the points of maximum correlation are in fact minima on the correlation surface, and are referred to as "minima"). In order to detect a minimum, additional points on the correlation surfaces are interpolated, providing a degree of compensation for the loss of resolution caused by the use of subsampled video to generate the surfaces. From the detected minimum on each correlation surface, the motion vector estimator 210 generates a motion vector which is supplied to the motion vector reducer 220.
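By way of illustration only, the construction of a difference surface and the detection of its minimum might be sketched as follows in Python with NumPy; the block position, block size and search margin are assumed parameters, and the sub-pixel interpolation of the surface, the confidence ("threshold") test and the alias test described below are omitted:

    import numpy as np

    def difference_surface(earlier, later, top, left, block_size, margin):
        """Build a 'correlation surface' of sums of absolute differences between one
        search block of the earlier field and every candidate offset within a larger
        search area of the later field (assumes the block lies far enough from the
        field edges that all candidate offsets stay in bounds)."""
        block = earlier[top:top + block_size, left:left + block_size].astype(np.int32)
        surface = np.empty((2 * margin + 1, 2 * margin + 1))
        for dy in range(-margin, margin + 1):
            for dx in range(-margin, margin + 1):
                cand = later[top + dy:top + dy + block_size,
                             left + dx:left + dx + block_size].astype(np.int32)
                surface[dy + margin, dx + margin] = np.abs(block - cand).sum()
        return surface

    def vector_from_surface(surface, margin):
        """Points of maximum correlation are minima on the surface; the motion
        vector is the offset of the lowest point."""
        dy, dx = np.unravel_index(np.argmin(surface), surface.shape)
        return dx - margin, dy - margin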
The motion vector estimator 210 also performs a confidence test on each generated motion vector to establish whether that motion vector is significant above the general noise level, and associates a confidence flag with each motion vector indicative of the result of the confidence test. The confidence test, known as the "threshold" test, is described (along with other features of the apparatus of Figure 1) in GB-A-2 231 749.
A test is also performed by the motion vector estimator 210 to detect whether each vector is aliased. In this test, the correlation surface (apart from an exclusion zone around the detected minimum) is examined to detect the next lowest minimum.
If this second minimum does not lie at the edge of the exclusion zone, the motion vector derived from the original minimum is flagged as being potentially aliased.
The motion vector reducer 220 operates to reduce the choice of possible motion vectors for each pixel of the output field, before the motion vectors are supplied to the motion vector selector 230. The output field is notionally divided into blocks of pixels, each block having a corresponding position in the output field to that of a search block in the earlier of the selected input fields. The motion vector reducer compiles a group of four motion vectors to be associated with each block of the output field, with each pixel in that block eventually being interpolated using a selected one of that group of four motion vectors.
Vectors which have been flagged as "alias" are re-qualified during vector reduction if they are identical to non-flagged vectors in adjacent blocks.
As part of its function, the motion vector reducer 220 counts the frequencies of occurrence of "good" motion vectors (i.e. motion vectors which pass the confidence test and the alias test, or which were re-qualified as non-aliased), with no account taken of the position of the blocks of the input fields used to obtain those motion vectors. The good motion vectors are then ranked in order of decreasing frequency. The most common of the good motion vectors which are significantly different to one another are then classed as "global" motion vectors. Three motion vectors which pass the confidence test are then selected for each block of output pixels and are supplied, with the zero motion vector, to the motion vector selector 230 for further processing. These three motion vectors are chosen in a predetermined order of preference from: (i) the motion vector generated from the corresponding search block; (ii) those generated from surrounding search blocks ("local" motion vectors); and (iii) the global motion vectors.
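The frequency ranking used to derive the global motion vectors might be sketched as follows; the limit on the number of global vectors and the test for vectors being "significantly different" are illustrative assumptions:

    from collections import Counter

    def derive_global_vectors(good_vectors, max_globals=3, min_separation=2):
        """Count the occurrences of each 'good' motion vector (ignoring block
        position), rank them by decreasing frequency, and keep the most common
        vectors that differ significantly from those already kept. good_vectors is
        a list of (dx, dy) tuples; the separation measure and limits are assumed."""
        ranked = [v for v, _ in Counter(good_vectors).most_common()]
        chosen = []
        for v in ranked:
            if all(abs(v[0] - g[0]) + abs(v[1] - g[1]) >= min_separation for g in chosen):
                chosen.append(v)
            if len(chosen) >= max_globals:
                break
        return chosen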
The motion vector selector 230 also receives as inputs the two input fields which were selected by the subsampled time base changer and delay 180 and which were used to calculate the motion vectors. These fields are suitably delayed so that they are supplied to the motion vector selector 230 at the same time as the vectors derived from those fields. The motion vector selector 230 supplies an output comprising one motion vector per pixel of the output field. This motion vector is selected from the four motion vectors for that block supplied by the motion vector reducer 220.
The vector selection process (described below in greater detail) involves detecting the degree of correlation between test blocks of the two input fields pointed to by a motion vector under test. The motion vector having the greatest degree of correlation between the test blocks is selected for use in interpolation of the output pixel. A "motion flag" is also generated by the vector selector. This flag is set to "static" (no motion) if the degree of correlation between blocks pointed to by the zero motion vector is greater than a preset threshold.
The motion vector post-processor 240 reformats the motion vectors selected by the motion vector selector 230 to reflect any vertical scaling of the picture, and supplies the reformatted vectors to the interpolator 140. Using the motion vectors, the interpolator 140 interpolates an output field from the corresponding two (non-subsampled) interlaced input fields selected by the time base changer and delay 130, taking into account any image motion indicated by the motion vectors currently supplied to the interpolator 140.
If the motion flag indicates that the current output pixel lies in a moving part of the image, pixels from the two selected fields supplied to the interpolator are combined in relative proportions depending on the temporal position of the output field with respect to the two input fields (as indicated by the control signal t), so that a larger proportion of the nearer input field is used. If the motion flag is set to "static" then temporal weighting is not used. The output of the interpolator 140 is passed to an output buffer 150 for output as a HDVS output signal, and to a downconverter 160 which generates a conventional definition output signal 165, using the motion flag.
The subsampler 170 performs horizontal and vertical spatial subsampling of the input fields received from the matrix 120. Horizontal subsampling is a straightforward operation in that the input fields are first pre-filtered by a half-bandwidth low pass filter (in the present case of 2:1 horizontal decimation) and alternate video samples along each video line are then discarded, thereby reducing by one half the number of samples along each video line.
Vertical subsampling of the input fields is complicated by the fact that the input video signal is interlaced. This means that successive lines of video samples in each interlaced field are effectively separated by two video lines of the complete frame, and that the lines in each field are vertically displaced from those in the preceding or following field by one video line of the complete frame. The method of vertical subsampling actually used involves a first stage of low pass filtering in the vertical direction (to avoid aliasing), followed by a filtering operation which effectively displaces each pixel vertically by half a video line downwards (for even fields) or upwards (for odd fields). The resulting displaced fields are broadly equivalent to progressively scanned frames which have been subsampled vertically by a factor of two.
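A crude software sketch of this subsampling is given below; the filter kernels are placeholders rather than the filters actually used in the apparatus, a separate vertical anti-alias pre-filter is omitted (the half-line averaging itself acts as a mild vertical low pass filter), and edge handling by wrap-around is a simplification:

    import numpy as np

    def subsample_field(field, odd_field):
        """Horizontal: low-pass prefilter each line, then discard alternate samples
        (2:1 decimation). Vertical: displace the field by half a video line (down
        for even fields, up for odd fields) by averaging adjacent field lines, so
        that the result approximates a progressively scanned frame subsampled
        vertically by a factor of two."""
        lp = np.array([0.25, 0.5, 0.25])                              # placeholder kernel
        filtered = np.apply_along_axis(
            lambda line: np.convolve(line, lp, mode="same"), 1, field)
        horizontal = filtered[:, ::2]                                 # discard alternate samples
        neighbour = np.roll(horizontal, -1 if odd_field else 1, axis=0)
        return 0.5 * (horizontal + neighbour)                         # half-line displacement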
Figure 2 is a schematic diagram illustrating the operation of the motion vector selector 230.
As mentioned above, the motion vector selector 230 receives local and global motion vectors from the motion vector reducer 220, the two subsampled input fields from which the motion vectors have been generated, and the control signal t from the time base changer and delay 180. For each pixel 300 in the current output field 310, the motion vector selector 230 tests each of the four possible motion vectors for that pixel by comparing test blocks 340 of pixels pointed to by that motion vector in each of the preceding and following input fields 320, 330.
The comparison of the test blocks 340 is made by calculating the sum of absolute luminance differences between corresponding pixels in the two blocks, with a lower sum indicating a higher correlation between the blocks. This comparison is performed for each of the four motion vectors supplied to the motion vector selector 230; however, for clarity of the diagram, only two of the four motion vectors (the zero motion vector 350 and another, non-zero motion vector 360) are shown in Figure 2.
Figure 3 is a schematic block diagram of the motion vector selector 230.
Four motion vectors for each block of output pixels are supplied to the motion vector selector 230 by the motion vector reducer 220. These four motion vectors, namely the zero motion vector and three other vectors referred to as v1, v2 and v3, are supplied to four respective processing units 400, 410, 420 and 430. Each of the processing units 400, 410, 420 and 430 comprises an address offset calculator 440, two random access memories (RAMs) 450, 460, each storing a relevant portion of a respective one of the pair of input fields selected by the time base changer and delay 180, and a block matcher and comparator 470.
In each of the processing units, the address offset calculator 440 receives the motion vector for that processing unit along with the temporal offset control signal (t) generated by the time base changer and delay 180. From these two values, the address offset calculator generates a plurality of memory addresses for accessing test blocks of pixels, stored in the RAMs 450, 460, which are pointed to by that motion vector. In response to the addresses supplied by the address offset calculator, each of the RAMs 450, 460 supplies an array of pixel values representing a test block to the block matcher and comparator 470.
The block matcher and comparator 470 compares pixel values (in particular the luminance component of the pixel values) at corresponding positions in the two test blocks. The absolute luminance difference between each pair of pixels is calculated and a sum of all of the absolute luminance differences (SAD) is generated to indicate the overall correlation between the two test blocks. A low SAD value for a particular motion vector indicates a high degree of correlation between the test blocks pointed to by that motion vector.
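In software terms the comparison amounts to little more than the following sum-of-absolute-differences calculation (an illustrative sketch, not the hardware implementation):

    import numpy as np

    def sad(test_block_a, test_block_b):
        """Sum of absolute (luminance) differences between two equally sized test
        blocks; a low value indicates a high degree of correlation."""
        return int(np.abs(test_block_a.astype(np.int32) - test_block_b.astype(np.int32)).sum())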
The processing unit 430 receives the motion vector v3 and calculates the SAD value for test blocks pointed to by that motion vector. The processing unit 430 then supplies the SAD value and an identifier of the vector v3 to the processing unit 420.
In the processing unit 420 the SAD value for the vector v2 is generated by the respective block matcher and comparator 470. The SAD value for v2 is then compared with the SAD value for v3 received from the processing unit 430. The lower of these two SAD values represents the motion vector (v2 or v3) having the higher degree of correlation between test blocks pointed to by that motion vector.
Accordingly, the processing unit 420 outputs the lower of the SAD values 475 and a vector identifier 480 identifying the vector from which the lowest SAD value was derived.
The processing unit 410 receives the vector v1 and calculates a SAD value from that vector. The SAD value for v1 is then compared with the lowest SAD value 475 of the vectors v2 and v3, as supplied by the processing unit 420. The processing unit 410 then outputs the lowest SAD value of the vectors v1, v2 and v3, along with an identifier of the vector from which that SAD value was generated.
The processing unit 400 generates a SAD value for the zero motion vector and compares this with the current lowest SAD value for the vectors v1, v2 and v3 received from the processing unit 410. From this comparison, a selected vector identifier 490, indicating that one of the four motion vectors for which the lowest SAD value was generated, is output by the block matcher and comparator 470 in the processing unit 400.
The SAD value for the zero motion vector (generated by the block matcher and comparator 470 in the processing unit 400) is supplied to a comparator 500 in which it is compared with a preset threshold value 510. The SAD value for the zero motion vector is supplied to the comparator 500 regardless of which of the four motion vectors (zero, v1, v2, v3) was selected for use in interpolation of the output pixel 300.
The comparator 500 generates a motion flag 520 which is "set" (indicating image motion) if the SAD value for the zero motion vector is greater than the threshold 510. If the SAD value for the zero motion vector is less than the threshold 510 then the motion flag is not set, thereby indicating that the current output pixel lies in a substantially stationary portion of the picture.
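The cascaded comparison performed by the processing units 400 to 430, together with the comparator 500, might be modelled roughly as follows; fetch_test_blocks is an assumed helper that returns the two test blocks pointed to by a given vector in the preceding and following fields, and the zero motion vector is assumed to be represented as (0, 0):

    import numpy as np

    def select_vector_and_motion_flag(candidate_vectors, fetch_test_blocks, threshold):
        """Compute the SAD for each candidate vector (the list including the zero
        vector), select the vector with the lowest SAD for interpolation of the
        output pixel, and set the motion flag if the SAD for the zero vector
        exceeds the preset threshold."""
        sads = []
        for vector in candidate_vectors:                   # e.g. [v3, v2, v1, (0, 0)]
            block_a, block_b = fetch_test_blocks(vector)
            diff = block_a.astype(np.int32) - block_b.astype(np.int32)
            sads.append(int(np.abs(diff).sum()))
        best = min(range(len(candidate_vectors)), key=sads.__getitem__)
        zero_sad = sads[candidate_vectors.index((0, 0))]
        motion_flag = zero_sad > threshold                 # "set" = moving output pixel
        return candidate_vectors[best], motion_flag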
As mentioned above, the operation of the interpolator 140 varies according to whether the motion flag is "set" for the output pixel being interpolated. The reason for this variation will now be described with reference to Figures 4, 5a, 5b, 6a and 6b.
Figure 4 is a schematic diagram of part of a progressively scanned video frame showing a diamond pattern. When the picture is represented as two successive interlaced video fields, as shown in Figure 5a (an odd video field) and Figure 5b (the next even video field), no vertical information is lost because the two video fields can be recombined to generate the entire diamond pattern.
However, when there is vertical image motion between the two successive video fields by an odd number of lines in the vertical direction, information is lost by the interlace process. This situation is illustrated in Figures 6a and 6b, where Figure 6a represents an odd video field derived from the pattern shown in Figure 4, and Figure 6b represents the next (even) video field after the image has moved vertically by one line. The fields shown in Figures 6a and 6b are identical; in other words, half of the picture information has been lost in this case.
In order to overcome the problem described above, the interpolator 140 switches between two modes of operation in dependence on whether the motion flag is set. If the motion flag is set, indicating that the current output pixel forms part of a moving portion of the image, then two intermediate pixel values are generated by intra-field interpolation of each of the pair of input fields. These intermediate values are then combined to generate the output pixel. In contrast, if the motion flag is not set, indicating that the current output pixel does not form part of a moving portion of the image, then pixels from the two input fields are interleaved and a single filtering operation is performed on the interleaved pixels to generate the output pixel.
These two modes of operation are illustrated in Figures 7 and 8.
Figure 7 schematically illustrates the operation of the interpolator when the motion flag is set. A half-bandwidth interpolation filter 600 is separately applied to each of the two input fields 320, 330 to generate two respective intermediate pixel values 610, 620. A half-bandwidth filter is employed to avoid vertical alias problems.
The two intermediate pixel values 610, 620 are then combined to generate the output pixel value, according to a temporal weighting such that a larger proportion of the intermediate pixel from the field temporally nearest to the current output field is used. In fact a combining ratio is used which varies linearly according to the temporal position (t) of the output field between the two input fields. This means that if t has a value of between 0 and 1, indicating the temporal separation between the output field and the preceding input field as a fraction of one input field period, then the two intermediate pixels are combined as follows:

output pixel = (1 - t) . (intermediate pixel from preceding input field) + t . (intermediate pixel from following input field)

Figure 8 schematically illustrates the operation of the interpolator 140 when the motion flag is not set. In this case, it has been determined that the current output pixel does not form part of a moving portion of the image and accordingly the full vertical bandwidth is available (as illustrated in Figures 5a and 5b). A full bandwidth interpolation filter 630 is applied to interleaved pixels of the two input fields 320, 330 to generate the output pixel.
Although it would be possible to use temporal weighting in the filtering arrangement shown in Figure 8, it is not actually employed in the present embodiment, for a number of reasons: (i) the filter would have to be specially modified to retain unity overall gain in spite of variations in the weighting of the two sets of pixels. Although this modification would be possible, the extra constraint on the filter design would mean that the filter performance (e.g. bandwidth) would be compromised; (ii) if temporal weighting were used, then as the temporal position of the output field approached that of one of the two input fields, the filter would be dominated by one half of the pixel values. The effect of this would be to halve the bandwidth of the filter; and (iii) the bandwidth red interpolation of the output pixel, in dependence on the state of the motion flag. The selected set of coefficients is passed, via a multiplexer 750, to the interpolators 700, 710.
Each of the interpolators 700, 710 combines a number of pixels from the respective input field according to the selected set of filter coefficients. The outputs of the interpolators are then multiplied by weighting coefficients in two multipliers 760, 770, the outputs of which are combined by an adder 800. A weighting coefficient of either 0.5 or t (the temporal position of the output field) is supplied to the multiplier 760 by a multiplexer 780, under the control of the motion flag.
Similarly, a weighting coefficient of either 0.5 or (1-t) is supplied to the multiplier 770 by a multiplexer 790, again under the control of the motion flag.
If weighting coefficients of t and (1-t) are employed, in conjunction with the "motion" set of coefficients, then the interpolator 140 operates according to the filtering arrangement of Figure 7, in that two intermediate pixels are generated and combined using temporal weighting. If, however, the "static" coefficients are employed with respective weightings of 0.5, this is equivalent to the interleaved, full bandwidth, filtering arrangement of Figure 8. The interpolator 140 therefore selectively acts as one of two different interpolators, in dependence on the motion flag.
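A rough software model of this arrangement is sketched below; it assumes a single one-dimensional filtering path per field and applies the same coefficient set to both fields, which is a simplification of Figure 9, and the pairing of the reference numerals with the preceding and following fields is likewise an assumption:

    import numpy as np

    def interpolate_pixel(prev_field_taps, next_field_taps,
                          static_coeffs, motion_coeffs, t, motion_flag):
        """One set of filter coefficients is selected by the motion flag (multiplexer
        750) and applied to pixels drawn from each input field (interpolators 700,
        710); the two results are then weighted by either 0.5/0.5 or (1 - t)/t
        (multiplexers 780, 790 feeding multipliers 760, 770) and summed (adder 800).
        The tap vectors and coefficient vectors are assumed to be of equal length."""
        coeffs = motion_coeffs if motion_flag else static_coeffs
        from_prev = float(np.dot(coeffs, prev_field_taps))
        from_next = float(np.dot(coeffs, next_field_taps))
        weight_prev = (1.0 - t) if motion_flag else 0.5
        weight_next = t if motion_flag else 0.5
        return weight_prev * from_prev + weight_next * from_next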
Various refinements of the above embodiment can reduce the potential problem that moving objects coming to rest are suddenly brought into focus when the full bandwidth filter 630 is switched into operation. For example, (i) a cross-fade could be performed between two output fields, one produced using the "static" (full bandwidth) interpolator and one produced using the "motion" (half bandwidth) interpolator; or (ii) one or two intermediate bandwidth filters (between full and half bandwidth) could be employed in succession after the motion has settled to zero, before the full bandwidth filter is used.
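Refinement (i) might be sketched, under an assumed settling period, as follows:

    def settle_crossfade(static_pixel, motion_pixel, fields_since_motion_stopped, settle_fields=8):
        """Fade from the half-bandwidth ('motion') output to the full-bandwidth
        ('static') output over a few fields after motion ceases, so that objects
        coming to rest are not snapped abruptly into focus. settle_fields is an
        assumed constant."""
        alpha = min(1.0, fields_since_motion_stopped / float(settle_fields))
        return alpha * static_pixel + (1.0 - alpha) * motion_pixel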

Claims (18)

1. Motion compensated video signal processing apparatus in which an output image is derived by motion compensated interpolation in response to a detection of image motion between a pair of input images, the apparatus comprising: a plurality of motion compensated pixel interpolators; and means for selecting one of the plurality of pixel interpolators, for interpolation of a pixel of the output image from the pair of input images, in response to a detection of image motion between the pair of input images.
2. Motion compensated video signal processing apparatus in which motion vectors are generated to represent image motion between a pair of input images from which an output image is to be derived by motion compensated interpolation, the apparatus comprising: means for detecting correlation between test blocks of the input images pointed to by a zero motion vector indicative of substantially zero image motion; and means for detecting whether the correlation between the test blocks pointed to by the zero motion vector exceeds a predetermined correlation threshold, thereby detecting whether the pixel of the output image is part of a moving or a stationary portion of the output image.
3. Apparatus according to claim 2, comprising means for generating a motion flag, associated with each pixel of the output image, indicating whether that pixel is detected to be part of a moving or a stationary portion of the output image.
4. Apparatus according to claim 2 or claim 3, comprising: means for generating a plurality of motion vectors for each pixel of the output image, the plurality of motion vectors including the zero motion vector; means for testing the motion vectors, to select a motion vector for use in interpolation of a pixel of the output image, comprising means for detecting correlation between test blocks of the input images pointed to by a motion vector under test; and means for selecting, from the plurality of motion vectors, that motion vector having the highest correlation between the test blocks pointed to by that motion vector.
5. Apparatus according to claim 4, comprising a motion compensated interpolator for generating each pixel of the output image from a respective pair of input images according to the motion vector selected for use in interpolation of that pixel.
6. Apparatus according to claim 5, in which the motion compensated interpolator comprises: two pixel interpolators; and means for selecting one of the two pixel interpolators, for interpolation of a pixel of the output image, in dependence on whether that pixel is detected to be part of a moving or a stationary portion of the output image.
7. Apparatus according to claim 6, in which one of the pixel interpolators comprises: means for interleaving pixels from the two input images, to generate an interleaved block of pixel values; and means for generating a pixel of the output image by intra-block interpolation of the block of interleaved pixel values.
8. Apparatus according to claim 6 or claim 7, in which one of the pixel interpolators comprises: means for generating two respective intermediate pixel values by intra-image interpolation of each of the pair of input images; and means for combining the two intermediate pixel values to generate a pixel of the output image.
9. Apparatus according to claim 8, in which the means for combining is operable to combine the two intermediate pixel values according to a combining ratio dependent upon the temporal position of the output image with respect to the pair of input images.
10. Apparatus according to claim 9, in which the combining ratio is proportional to the temporal position of the output image with respect to the pair of input images.
11. Apparatus according to any one of the preceding claims, in which the pair of input images comprises two successive fields of an interlaced input video signal.
12. Apparatus according to any one of the preceding claims, in which the output image comprises a field of an interlaced output video signal.
13. Television standards conversion apparatus comprising apparatus according to any one of the preceding claims.
14. A method of motion compensated video signal processing, in which motion vectors are generated to represent image motion between a pair of input images from which an output image is to be derived by motion compensated interpolation, the method comprising the steps of: detecting the correlation between test blocks of the input images pointed to by a zero motion vector indicative of substantially zero image motion; and detecting whether the correlation between the test blocks pointed to by the zero motion vector exceeds a predetermined correlation threshold, thereby detecting whether the pixel of the output image is part of a moving or a stationary portion of the output image.
15. A method of motion compensated video signal processing, in which an output image is derived by motion compensated interpolation in response to a detection of image motion between a pair of input images, the method comprising the step of selecting one of a plurality of pixel interpolators, for interpolation of a pixel of the output image from the pair of input images, in response to the detection of image motion between the pair of input images.
16. Motion compensated video signal processing apparatus substantially as hereinbefore described with reference to the accompanying drawings.
17. A method of motion compensated video signal processing, the method being substantially as hereinbefore described with reference to the accompanying drawings.
18. Television standards conversion apparatus substantially as hereinbefore described with reference to the accompanying drawings.
GB9715329A 1993-04-08 1993-04-08 Motion compensated video signal processing Expired - Fee Related GB2312806B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB9715329A GB2312806B (en) 1993-04-08 1993-04-08 Motion compensated video signal processing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB9307473A GB2277004B (en) 1993-04-08 1993-04-08 Motion compensated video signal processing
GB9715329A GB2312806B (en) 1993-04-08 1993-04-08 Motion compensated video signal processing

Publications (3)

Publication Number Publication Date
GB9715329D0 GB9715329D0 (en) 1997-09-24
GB2312806A true GB2312806A (en) 1997-11-05
GB2312806B GB2312806B (en) 1998-01-07

Family

ID=10733640

Family Applications (2)

Application Number Title Priority Date Filing Date
GB9715329A Expired - Fee Related GB2312806B (en) 1993-04-08 1993-04-08 Motion compensated video signal processing
GB9307473A Expired - Fee Related GB2277004B (en) 1993-04-08 1993-04-08 Motion compensated video signal processing

Family Applications After (1)

Application Number Title Priority Date Filing Date
GB9307473A Expired - Fee Related GB2277004B (en) 1993-04-08 1993-04-08 Motion compensated video signal processing

Country Status (2)

Country Link
JP (1) JP3469626B2 (en)
GB (2) GB2312806B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19509418A1 (en) * 1995-03-16 1996-09-19 Thomson Brandt Gmbh Method and circuit arrangement for subsampling in motion estimation
JP4119092B2 (en) * 1998-06-25 2008-07-16 株式会社日立製作所 Method and apparatus for converting the number of frames of an image signal
EP1075147A1 (en) * 1999-08-02 2001-02-07 Koninklijke Philips Electronics N.V. Motion estimation
KR100467625B1 (en) * 2003-02-03 2005-01-24 삼성전자주식회사 Image processing apparatus and method using frame-rate conversion
JP5347874B2 (en) * 2009-09-28 2013-11-20 富士通株式会社 Image processing apparatus, image processing method, and program
US8405769B2 (en) * 2009-12-22 2013-03-26 Intel Corporation Methods and systems for short range motion compensation de-interlacing

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2280811A (en) * 1993-08-03 1995-02-08 Sony Uk Ltd Motion compensated video signal processing

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2045574B (en) * 1979-03-22 1983-04-20 Quantel Ltd Video movement detection
GB2231749B (en) * 1989-04-27 1993-09-29 Sony Corp Motion dependent video signal processing
JP2507138B2 (en) * 1990-05-21 1996-06-12 松下電器産業株式会社 Motion vector detection device and image shake correction device

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2280811A (en) * 1993-08-03 1995-02-08 Sony Uk Ltd Motion compensated video signal processing

Also Published As

Publication number Publication date
GB2312806B (en) 1998-01-07
JP3469626B2 (en) 2003-11-25
GB2277004B (en) 1998-01-07
GB9715329D0 (en) 1997-09-24
JPH0795593A (en) 1995-04-07
GB9307473D0 (en) 1993-06-02
GB2277004A (en) 1994-10-12

Similar Documents

Publication Publication Date Title
US5943099A (en) Interlaced-to-progressive conversion apparatus and method using motion and spatial correlation
US5600377A (en) Apparatus and method for motion compensating video signals to produce interpolated video signals
AU643565B2 (en) Video image processing
US5070403A (en) Video signal interpolation
US5631706A (en) Converter and method for converting video signals of interlace format to video signals of progressive format
US5526053A (en) Motion compensated video signal processing
JP2832927B2 (en) Scanning line interpolation apparatus and motion vector detection apparatus for scanning line interpolation
GB2263601A (en) Motion compensated video signal processing
US5386248A (en) Method and apparatus for reducing motion estimator hardware and data transmission capacity requirements in video systems
US5485224A (en) Motion compensated video signal processing by interpolation of correlation surfaces and apparatus for doing the same
JP3619542B2 (en) Motion correction video signal processing apparatus and method
GB2263602A (en) Motion compensated video signal processing
GB2312806A (en) Motion compensated video signal interpolation
US5442409A (en) Motion vector generation using interleaved subsets of correlation surface values
JPH06326980A (en) Movement compensating type processing system of picture signal
GB2308774A (en) Selection of global motion vectors for video signal processing
GB2277006A (en) Generating motion vectors; subsampling video signals, interpolating correlation surfaces
GB2281167A (en) Motion compensated video signal processing
GB2277003A (en) Determining the motion of regular patterns in video images
GB2276999A (en) Motion compensated video signal processing; detecting "ridge" motion
JP2770300B2 (en) Image signal processing
JP4346327B2 (en) Progressive scan conversion apparatus and progressive scan conversion method
JP2938677B2 (en) Motion compensation prediction method
EP0474272A1 (en) Method and apparatus for reducing motion estimator hardware and data transmission capacity requirements in video systems
GB2230914A (en) Video signal interpolation

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20060408