GB2264416A - Motion compensated video signal processing - Google Patents


Info

Publication number
GB2264416A
Authority
GB
United Kingdom
Prior art keywords
output
pixels
input
frame
video signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB9203306A
Other versions
GB2264416B (en)
GB9203306D0 (en)
Inventor
John William Richards
Morgan William Amos David
Tsuneo Morita
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Broadcast and Communications Ltd
Sony Europe BV United Kingdom Branch
Original Assignee
Sony Broadcast and Communications Ltd
Sony United Kingdom Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Broadcast and Communications Ltd, Sony United Kingdom Ltd filed Critical Sony Broadcast and Communications Ltd
Priority to GB9203306A priority Critical patent/GB2264416B/en
Publication of GB9203306D0 publication Critical patent/GB9203306D0/en
Publication of GB2264416A publication Critical patent/GB2264416A/en
Application granted granted Critical
Publication of GB2264416B publication Critical patent/GB2264416B/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/523Motion estimation or motion compensation with sub-pixel accuracy
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/81Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/144Movement detection
    • H04N5/145Movement estimation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/253Picture signal generating by scanning motion picture films or slide opaques, e.g. for telecine
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/91Television signal processing therefor
    • H04N5/93Regeneration of the television signal or of selected parts thereof
    • H04N5/95Time-base error compensation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/01Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0112Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level one of the standards corresponding to a cinematograph film standard


Abstract

A method of processing a video signal to compensate for distortion resulting from a skew mismatch in the temporal characteristics of the image source 2 and the intended display device 3 comprises producing an output video signal from the input video signal by motion compensated temporal interpolation. The process is such that pixels in an interpolated output field or frame are each derived from pixels in a pair of input fields or frames, the input pixels being spatially displaced from the position of the output pixel by amounts dependent upon the skew mismatch of the image source 2 and the display device 3. An output signal having a different frame rate to the input signal may be produced if required.

Description

DIGITAL VIDEO SIGNAL PROCESSING

The present invention relates to processing of digital video signals.
Images, both moving and still, can be acquired from a variety of different sources such as film cameras, video cameras and CCD imagers, and can also be generated by computer graphics. The images may then be processed and displayed by an equally wide variety of display devices such as CRTs, LCDs and film projectors.
Different image acquisition and display devices exhibit different temporal characteristics. The temporal characteristics of an acquisition or display device depend on the way in which each field or frame of the image is acquired or displayed over time. While images of still scenes are unaffected by these characteristics, for images of moving scenes, if the temporal characteristics of the acquisition and display device are not matched, the displayed image will appear to be geometrically distorted or skewed. This problem can be understood by considering Figures 1 and 2 of the accompanying drawings.
Figure 1 shows an example of a temporally mismatched system. An image of a moving object 1 is acquired by a shuttered CCD camera 2 and displayed on a CRT display 3. One field of the image is acquired each time the shutter of the camera 2 is opened, an array of image cells being exposed to the scene viewed. The accumulated image charge is shifted to a protected area during blanking by the shutter (shuttering being effected by preventing accumulation of charge for a portion of the field interval). The image data is then read out at video rate.
All points in each field of the image are therefore acquired over substantially the same time interval, ie the interval for which the image cells are exposed. Since all points are sampled at the same time, even though the object 1 is moving horizontally from left to right, a "frozen" field of the acquired image, as indicated in the inset block in Figure 1, will be geometrically correct. However, when the acquired image is displayed on the CRT display 3, the displayed image will appear to be distorted as shown in the figure.
The CRT 3 displays the image by an interlaced raster scan system. As successive fields are displayed, the object appears to the eye to be distorted or skewed as shown on the CRT 3 for the following reason.
If the image had been acquired by a device with a temporally identical raster scan system to the CRT 3, then each pixel in each field of the image would have been acquired at a different time. Since the object 1 is moving from left to right in the figure, the horizontal positions of the pixels in each field corresponding to the left hand edge of the object, for example, would be shifted to the right by successively greater amounts from the top to the bottom of the image.
Overall, in a frozen field of the acquired image, the object would appear to lean in the opposite direction to the image shown on the CRT 3 in Figure 1. Thus, when a moving square is displayed on a raster scan display, the eye expects each complete field of the display to depict a square skewed in this manner. When, as in the system of Figure 1, each field or frame actually depicts a geometrically correct square, the eye accounts for the time difference between display of the different parts of the square, and interprets the image as shown on the CRT display 3. Thus, the skew mismatch in the temporal characteristics of the CCD camera 2 and the CRT 3 results in geometric distortion of the displayed image.
Figure 2 shows an example of a temporally matched system ie a system in which there is no skew mismatch in the temporal characteristics of the image source and the display device. In Figure 2, the object 1, again moving from left to right, is viewed by a tube video camera 4 and the image is displayed on the CRT 3. The tube camera 4 is "read" on a raster scan basis at the same rate as the raster scan of the CRT display 3. In this case, a "frozen" field of the image will depict a square skewed to the left, as shown in the inset block in Figure 2, for the reasons previously described.
However, when displayed on the CRT 3, the eye accounts for the time difference between display of different parts of the image and interprets the displayed image as a geometrically correct square as shown on the CRT 3 in the figure.
The skew characteristics of film cameras can be considered to be very similar to those of the shuttered CCD camera described above with reference to Figure 1. All points in each frame of an image acquired by a film camera are acquired over substantially the same time interval. (Due to the way in which the shutter traverses the film, all points are not acquired over exactly the same time interval, so there will be a degree of temporal offset between different parts of each frame.) Thus, when film images are processed for display on a raster scan basis, the skew mismatch in temporal characteristics will result in skewing of the viewed image as before. (An additional complication is introduced here where the display operates on an interlaced raster scan basis, since additional distortion (double imaging) of moving images will be perceived where each even/odd field pair displayed is derived from the same frame of film, such that the image data in the fields of each pair is not temporally offset in the source image.)

Similar problems to those described above arise in the display of images from other sources such as computer graphics and animation material. For example, animation material is usually made up of a succession of shots of a static scene, the content of the scene being moved after each shot. Since the scene is completely static, there is no temporal offset between different parts of each frame of the image.
Such image sources therefore simulate "idealised sampling", namely sampling of a moving scene in such a manner that there is no temporal offset between the image data within each field/frame of the image.
General purpose computer graphics also simulate idealised sampling.
Displayed images derived from such sources will appear to be distorted in dependence upon the temporal characteristics of the display device.
It will be understood from the above that geometric distortion of moving images may be perceived wherever there is a skew mismatch in the temporal characteristics of the image source and the display device.
It is, however, frequently desirable to process an image for display by a device where there is such a skew mismatch between the image source and display device. A common example of this is photographic film to video conversion. In an example of this process for 24Hz film material, the 24Hz 1:1 format frames are first captured by a high definition digital film scanner and the output video signal is recorded on tape. The subsequent processing involves format conversion of the image signal from 24 frames/s progressive scan format to 60 fields/s 2:1 interlace format. This involves a change in the frame rate, and in order to ensure smooth motion portrayal in the displayed video image, a conversion technique involving "motion compensated temporal interpolation" is utilised. The technique of motion compensated temporal interpolation is described in detail in United Kingdom patent applications nos GB2231228A and 9024836.0, the contents of which are incorporated herein by reference.

The essence of this technique when used in the above conversion process is as follows. Motion is detected in areas of the image between pairs of temporally adjacent 24 frames/s progressive scan frames. Output fields are then produced at 60 fields/s, ten for every four of the 24 frames/s frames. Two in every ten of the output fields may be derived from a single input frame, but each pixel of each of the other output fields is derived from pixels in a respective pair of the 24 frames/s frames spatially displaced from the output pixel position in dependence upon the detected motion and the temporal misalignment between the output field and the pair of frames from which it is formed. Thus, during the conversion process, output fields are generated at new temporal output sample sites, the temporal offset with respect to the input frames of all pixels in a given output field being constant.
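The ten-fields-for-four-frames timing relationship can be sketched as follows. This is an illustrative sketch only, not part of the application; the function name and the exact-rational bookkeeping (used to avoid floating point rounding at the frame boundaries) are assumptions.

```python
from fractions import Fraction

# Rate conversion geometry: ten 60 fields/s output fields are produced
# for every four 24 frames/s input frames. For each output field, find
# the preceding input frame and the temporal interpolation fraction
# (a fraction of 0 means the field can be derived from a single frame).
def field_alignment(num_fields=10):
    alignments = []
    for k in range(num_fields):
        pos = Fraction(k * 24, 60)               # field time in input-frame units
        prev = pos.numerator // pos.denominator  # preceding input frame index
        frac = float(pos - prev)                 # temporal offset within the pair
        alignments.append((k, prev, frac))
    return alignments
```

Running this over one ten-field cycle shows exactly two fields (the first and the sixth) landing on an input frame, consistent with "two in every ten of the output fields may be derived from a single input frame".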
That is to say, no account is taken of the temporal offset in display of the individual pixels in a given field by a raster scan display. Since no account is taken of the skew mismatch in temporal characteristics, the displayed image will appear to be distorted as previously described.
According to a first aspect of the present invention there is provided a method of processing an input digital video signal to compensate for a skew mismatch in the temporal characteristics of the image source and the intended display device, the method comprising producing an output video signal from the input video signal by motion compensated temporal interpolation such that at least some of the output pixels in each interpolated output field or frame are each derived from input pixels in a pair of input fields or frames, the input pixels being spatially displaced from the position of the output pixel by amounts dependent upon the skew mismatch of the image source and the display device.

Thus, after detecting the motion of parts of the image, by comparing pairs of input frames for example, the temporal offsets with respect to the input frames of pixels in a given output field/frame are set in dependence upon the skew mismatch in the temporal characteristics of the image source and display device.
The ratio of the spatial displacements between the position of the output pixel and the said input pixels may change progressively with successive lines of pixels in each output field or frame. For example, consider the case where a raster scan display device is to be used to display an image from a source which simulates idealised sampling, or an image acquired by a device, such as a film camera or a CCD camera, which acquires all points in each field or frame of the image over substantially the same time interval. An acceptable representation of the image may be produced without accounting for the temporal offset between individual pixels in the same line of the display, but merely the temporal offset between successive lines of the display. In this case, for each said output pixel, the said input pixels are spatially displaced from the position of the output pixel by amounts dependent upon the time interval between display of the raster line corresponding to the output pixel and the beginning of the corresponding field or frame of the display.

However, it may be desirable for the ratio of the spatial displacements between the position of the output pixel and the said input pixels to change progressively along each line of pixels as well as with successive lines of pixels in each output field or frame. Considering again the example of display on a raster scan basis of an image from a source which simulates idealised sampling or from a film or CCD camera, then for each said output pixel the said input pixels may be spatially displaced from the position of the output pixel by amounts dependent upon the time interval between display of the output pixel and the beginning of the corresponding field or frame. In other words, the temporal offset of each individual output pixel with respect to the input fields/frames is set in dependence on the temporal characteristics of the display.
Each interpolated output frame is a simulation of the frame which would have been acquired by a raster scan acquisition device with identical temporal characteristics to the display. This process therefore counteracts the skew mismatch between the image source and display so that the image appears to be geometrically correct to the eye.
According to another aspect of the invention there is provided a method of processing an input digital video signal, in which pixels of output fields or frames are temporally interpolated between pairs of input fields or frames using motion compensated temporal interpolation, characterised in that at least some of the pixels of an output field or frame are temporally interpolated with a different interpolation ratio to other pixels of that output field or frame. As previously described, the interpolation ratio, ie the ratio of the spatial or temporal displacements of each output pixel from the input pixels from which it is derived, varies within a given output field or frame in dependence upon the skew mismatch between the source and display.
For the purpose of the subsequent processing, and for improved vertical resolution in the output frames, it may be desirable for the said input signal to be a progressive scan format signal. Where the image is obtained from a progressive format source such as film, the video signal derived from the source may be utilised directly as the input video signal. However, where the image is acquired by, for example, a CCD camera, the image data in successive acquired fields being temporally offset in the source image, it may be desirable for a progressive scan format signal to be produced by interpolation from fields of the original signal, and for the progressive scan format signal to be utilised as the input video signal. The process of progressive scan conversion is described in detail in UK patent applications nos GB2231228A and 9024836.0 referred to above. Briefly, however, the process involves producing progressive scan frames, at the same rate as the original fields, each either from one or from three of the original fields.
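A crude intra-field stand-in for this conversion step can be sketched as follows. The referenced applications construct each progressive frame from one or three fields; the single-field vertical-averaging version below is only an illustrative assumption, and the function name is invented.

```python
# Build one progressive frame from a single interlaced field by filling
# each missing interlace line with the average of the field lines above
# and below it. (A stand-in for the one-or-three-field interpolation
# described in the referenced applications.)
def progressive_frame(field):
    frame = []
    for i, line in enumerate(field):
        frame.append(list(line))
        if i + 1 < len(field):
            # interpolate the missing line between adjacent field lines
            frame.append([(a + b) / 2 for a, b in zip(line, field[i + 1])])
    return frame
```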
It may of course be necessary to produce an output video signal with a different frame rate to the input video signal. For example, if an image derived from 24 frames/s film is to be displayed by a 60 fields/s display device, then frame rate conversion will be required.
This can be done during the motion compensated interpolation process in the manner described in detail in UK patent application no 9024836.0.
In such cases, the output fields/frames will be temporally offset from the input frames from which they are formed in addition to the temporal offset of pixels within each output frame required for skew mismatch correction.
The value of an output pixel is obtained from a combination of the values of pixels in the corresponding pair of input frames. In a simple case, the output pixel value may be determined as half the sum of the values of the two appropriate pixels in the pair of input frames. (In fact, in each input frame, the average pixel value of a patch of pixels around a given input pixel may be used, rather than a single input pixel value.) However, for greater accuracy and to reduce the effect of any errors, the output pixel value may be a weighted combination of the input pixel, or pixel patch, values. The ratio of the weighting factors may be the inverse of the interpolation ratio and may vary progressively within each output field or frame in accordance with the variation of the interpolation ratio.
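The two combination schemes just described (equal halves, or weights inverse to the interpolation ratio) can be sketched as a single function. This is illustrative only; the name and signature are assumptions, and patch averaging is omitted for brevity.

```python
# Combine one pixel (or patch average) from the preceding input frame
# with one from the succeeding input frame. alpha is the temporal
# offset of the output pixel from the preceding frame (0..1), so with
# weighting the temporally nearer frame contributes more.
def combine(p_prev, p_succ, alpha, weighted=True):
    if weighted:
        return (1.0 - alpha) * p_prev + alpha * p_succ
    return 0.5 * p_prev + 0.5 * p_succ
```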
Preferred embodiments of the invention will now be described, by way of example, with reference to the accompanying drawings in which:

Figure 1 is a schematic representation of a mis-matched source and display system;
Figure 2 is a schematic representation of a matched source and display system;
Figure 3 is a block diagram of processing apparatus embodying the invention;
Figure 4 is a more detailed block diagram of part of the apparatus of Figure 3;
Figure 5 illustrates diagrammatically the operation of part of the apparatus of Figure 3 during a processing operation carried out by the apparatus of Figures 3 and 4;
Figure 6 illustrates schematically the temporal relationship between input and output frames during a simplified version of the processing operation of Figure 5;
Figure 7 illustrates schematically the temporal relationship between input and output frames during a different processing operation to that of Figures 5 and 6; and
Figures 8 and 9 are diagrams related to Figures 6 and 7 respectively, showing in each case how the value of a given output pixel is derived.
The apparatus of Figure 3 comprises a high definition digital VTR 10, a skew corrector 11, a frame recorder 12, and a second high definition digital VTR 13 connected as shown. A television monitor 14 is associated with the VTR 13. The elements 10 to 13 are under the control of a system controller 15.
The operation of the skew corrector 11 is such that it produces an output at less than real time video rate and thus it is convenient for the VTR 10 to be capable of reproducing the recorded video signal at less than normal video speed. Alternatively, the VTR 10 may be operated in burst mode, its output fields being supplied to a further frame recorder (not shown) where they are temporarily stored. The stored frames are then output to the skew corrector 11 at an appropriate rate under the control of the system controller 15. If the VTR 13 is capable of recording in slow motion, then in theory the frame recorder 12 is not required. However, in practice the frame recorder 12 may be useful since it facilitates intermittent operation of the apparatus and periodic evaluation of the results by the system controller 15 and on the monitor 14. The frame recorder 12 also enables processed video signals output at a variety of rates to be recorded on standard equipment.
In operation, image data from the image source is converted, if necessary, to digital video format and recorded on video tape. For example, film images may be converted to video format using "flying spot" or "linear CCD" telecines. The recorded video signal is reproduced by the VTR 10 and applied at the input to the skew corrector 11 which processes the signal using motion compensated temporal interpolation to produce a "skew corrected" output signal in which the skew mismatch in the temporal characteristics of the image source and intended display device has been compensated for. This signal is output to the frame recorder 12 and then recorded by the VTR 13. If necessary, for example when converting from 24Hz film to 60 fields/s video, the skew corrector 11 produces the output signal with a different frame rate to the input signal.
Figure 4 is a block diagram of the skew corrector 11 shown in Figure 3. The skew corrector 11 comprises an input 20 which receives a signal from the VTR 10. The input 20 is connected to a progressive scan converter 21 for producing progressive scan frames from input fields in the case where the fields of each frame are temporally offset. Where the signal recorded by the VTR 10 is derived from a progressive format source, there will be no temporal offset between the pairs of fields supplied to the input 20 and the progressive scan converter 21 may be bypassed or operated in previous field replacement mode. In either case, progressive scan frames are supplied as the input signal to a time-base corrector 22 where they are temporarily stored so that two progressive scan frames are available at a time.
These frames are supplied by the time-base corrector 22 to a motion processor generally indicated at 23. The motion processor 23 comprises a direct block matcher 24 which compares the contents of blocks of pixels in the input frames supplied by the time-base corrector 22 and produces correlation surfaces representing the difference in the contents so compared. These correlation surfaces are analyzed by a motion vector estimator 25 which derives motion vectors representing the motion of the content of respective blocks between the two input frames. The motion vectors are then supplied to a motion vector reducer 26 in which additional motion vectors are assigned to each block. The motion vectors are then supplied to a motion vector selector 27 which selects the best motion vector for each output pixel to be produced. Any irregularity in the selection of the motion vectors by the motion vector selector 27 is removed by a motion vector post-processor 28 from which the processed motion vectors are supplied to an interpolator 29.
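The direct block matching stage can be illustrated with a minimal sketch. The patent does not specify the difference measure or search strategy, so the sum-of-absolute-differences measure, the exhaustive search, and the function name below are all assumptions; a real implementation would also produce the full correlation surface for the motion vector estimator rather than only its minimum.

```python
# Minimal direct block matcher: for one block of the first frame,
# evaluate a difference (correlation) surface against displaced blocks
# of the second frame, and return the displacement of the minimum as
# the block's motion vector. Frames are lists of rows; the caller must
# keep block + search range inside the frame bounds.
def block_match(frame_a, frame_b, bx, by, bsize, search):
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            sad = 0  # sum of absolute differences for this displacement
            for y in range(bsize):
                for x in range(bsize):
                    pa = frame_a[by + y][bx + x]
                    pb = frame_b[by + y + dy][bx + x + dx]
                    sad += abs(pa - pb)
            if best is None or sad < best[0]:
                best = (sad, (dx, dy))
    return best[1]  # motion vector (dx, dy) of the block between frames
```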
Pairs of progressive scan frames from the time base corrector 22 are also supplied to the motion vector selector 27 which also receives an input from a temporal position generator 30. The temporal position generator 30 determines the appropriate temporal offset or interpolation ratio for each pixel of an output frame in dependence on the type of skew to be corrected. The temporal position generator receives an input from the system controller 15 indicating the type of skew to be corrected, a timing input (FLD SYNC) synchronized with the fields of the acquisition or display device, and a pixel clock input (PIX CLK) to trigger adjustment of the temporal offset supplied by the temporal position generator for different output pixels. The temporal offset from the temporal position generator 30 is also supplied to the interpolator 29 which receives frames from the time-base corrector 22 via a system delay compensator 31. The compensator 31 supplies the progressive scan frames from the time-base corrector 22 to the interpolator 29 after a time equal to the delay introduced by the operation of the motion processor 23.
The operation of the progressive scan converter 21, motion processor 23 and interpolator 29 is described in detail in GB2231228A and UK patent application no 9024836.0. The motion compensated interpolation process described in GB2231228A relates only to the processing of an input 60 fields/s 2:1 interlace scan format video signal to produce a 24 frames/s progressive scan format video signal.
Use of motion compensated interpolation to perform format conversions other than the specific example described in GB2231228A is disclosed in UK patent application no 9024836.0. In all these processes, the same temporal offset is applied to each pixel in any one output field or frame, ie the temporal offset between that output field or frame and the input frames from which it is derived. Thus, for all pixels in a given output field or frame, the ratio of the temporal or spatial displacements of the output pixel from the input pixels from which it is derived, ie the interpolation ratio, is the same. In the embodiment of the present invention described above, the interpolation ratio supplied by the temporal position generator 30 varies for different pixels in the same output field or frame to compensate for a skew mismatch between the image source and display.
Calculation of the temporal offset by the temporal position generator 30 will now be described, with reference to Figure 5, for the case where the apparatus is used to correct the skew associated with the image acquisition and display system shown in Figure 1, when the CCD camera is read out in field-frame mode (ie all accumulated charge is reset each field). Line a in Figure 5 indicates the temporal reference positions of a series of odd and even fields ("0" and "1" respectively). If the CCD camera has a 1/500s aperture time, then the field sampling periods are approximately as shown in line b. In the interval between the temporal reference positions of successive fields, for example 1/60th of a second, a CRT display will display n lines of pixels, each pixel displayed being temporally offset from the previous one by one temporal sample period. As shown in line c, the temporal position of the first pixel in the first line of each field of the display is aligned with the temporal reference position for the corresponding acquired field. However, the temporal position of each pixel after the first will be offset from the reference position by a corresponding number of output sample periods. Thus, as shown in line d, the temporal position of the pixel half-way along line n/2 of each output field will be mid-way between two temporal field references.
The temporal position of the last pixel in line n of each output field will be immediately before the following field reference position as shown in line e. Thus, the temporal position generator 30 increments the temporal offset by one output sample period for each pixel of an output field.
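The per-pixel offset accumulation just described can be sketched as follows. The sketch ignores blanking intervals (so the "output sample period" is simply the field period divided by the number of active samples); that simplification and the function name are assumptions.

```python
# Temporal position generator sketch for the field-frame CCD / CRT
# case: the offset of each output pixel from its field reference grows
# by one output sample period per pixel, so the last pixel of the last
# line sits just before the next field reference.
def temporal_offset(line, pixel, pixels_per_line, lines_per_field):
    """Offset of an output pixel from the field reference, expressed
    as a fraction of the field period (0 <= offset < 1)."""
    sample = line * pixels_per_line + pixel    # output sample index
    total = lines_per_field * pixels_per_line  # samples per field
    return sample / total
```

With 4 lines of 10 pixels, the first pixel of the third line lands exactly half-way through the field period, matching the "mid-way between two temporal field references" behaviour of line d in Figure 5.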
Briefly, the operation of the motion processor 23 for the above example is as follows. Pairs of temporally adjacent input frames, supplied by the time-base corrector 22, are compared to detect motion in areas of the image between the two frames. Motion vectors are assigned to each of the input frame pixels in dependence upon the detected motion of that pixel between the corresponding pair of input frames. Since one output field is to be produced for each input frame in this case, the pixel positions for each output field lie temporally somewhere between a pair of input frames. Motion vectors are assigned by the motion vector selector 27 to each pixel of an output field by testing input frame motion vectors with the output pixel located along each motion vector at its correct position depending on the value of the motion vector under test and the known temporal offset of the output pixel supplied by the position generator 30. Once the correct motion vector for each output pixel has been selected, the motion vectors are passed to the interpolator 29. For each output pixel, the interpolator 29 uses the associated motion vector and the temporal position of the output pixel relative to the two input frames to determine which part of the first frame should be combined with which part of the second frame, with appropriate weighting, to produce the correct output pixel value. One interpolated field is produced for each pair of input frames, the fields being alternately odd and even.
The output fields are then recorded on the VTR 13.
In the particular processing operation described above, the interpolation ratio changes progressively along each line of output pixels as well as with successive lines of output pixels. However, acceptable results may be achieved by neglecting the temporal offset between successive pixels in each line of the display and using the same interpolation ratio for all pixels in the same line of an output field. Figure 6 illustrates schematically the temporal relationship between input frames and output fields under this approximation.
In Figure 6, three input frames Fi and two output fields Fo are shown in their relative temporal positions on rectangular coordinate axes H, V, T, where H is horizontal pixel position, V is vertical pixel position, and T is time. The individual pixels Pi and Po of the input frames and output fields respectively are shown schematically. All the input pixels Pi of each progressive scan input frame Fi are at the same temporal position. Successive lines of output pixels Po in each output frame Fo are temporally offset from one another, but all pixels in each line are temporally aligned. The output pixels are interpolated between input pixels in the two adjacent input frames, the input pixels being located by the interpolator 29 using the motion vector assigned to each output pixel and the interpolation ratio supplied by the temporal position generator 30. This process can be understood from a consideration of Figure 8.
In Figure 8, Fo designates an output field of Figure 6, Fp the temporally preceding progressive scan input frame and Fs the temporally succeeding progressive scan input frame. Po(h, v) is the value of the pixel at coordinates (h, v) in the output field Fo, and M is the motion vector (mh, mv) assigned to that pixel. If α and β are the interpolation ratios for the output pixel, then the output pixel is formed from a pixel (or patch) in frame Fp, obtained by moving back along the motion vector a distance αM, and a pixel (or patch) in frame Fs, obtained by moving forwards along the motion vector a distance βM.
Thus if Pp(h, v) is the value of a pixel at location (h, v) in frame Fp, and Ps(h, v) is the value of a pixel at location (h, v) in frame Fs, then without weighting:

Po(h, v) = ½ Pp[(h, v) − α(mh, mv)] + ½ Ps[(h, v) + β(mh, mv)]

From Figure 8:

α = t/F and β = (F − t)/F

where: t = temporal offset of Po(h, v) from Fp; f = active output field period; and F = total output field period.
Thus, if L = number of lines in an output frame (0 ≤ v ≤ L − 1), from Figure 8:

t/f = v/(L − 1), therefore α = vf/((L − 1)F), and β = 1 − vf/((L − 1)F)

Therefore:

Po(h, v) = ½ Pp[(h, v) − (vf/((L − 1)F))(mh, mv)] + ½ Ps[(h, v) + (1 − vf/((L − 1)F))(mh, mv)]

Therefore, if L = 1035, for line v = 0:

Po(h, 0) = ½ Pp(h, 0) + ½ Ps[(h, 0) + (mh, mv)]

and for line v = 1034:

Po(h, 1034) = ½ Pp[(h, 1034) − (f/F)(mh, mv)] + ½ Ps[(h, 1034) + (1 − f/F)(mh, mv)]

Thus, the temporal position generator 30 can determine the interpolation ratio from the line number of the output pixel, and this together with the appropriate motion vector (mh, mv) from the motion processor 23 enables the value of each output pixel to be determined by the interpolator 29.
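Under this approximation the interpolation ratio follows directly from the output line number. A small sketch, with the function name and the particular f and F values as illustrative assumptions:

```python
# Interpolation ratios for the film-source -> raster-display case of
# Figure 8 (temporally flat input frames, skewed output fields):
# alpha grows linearly down the field, from 0 on line 0 to f/F on the
# last line.

def film_to_raster_ratios(v, L, f, F):
    """Return (alpha, beta) for output line v, 0 <= v <= L - 1, where
    L = lines per output frame, f = active output field period, and
    F = total output field period (f < F because of blanking)."""
    alpha = (v * f) / ((L - 1) * F)
    return alpha, 1.0 - alpha
```

With L = 1035, line 0 gives (0, 1), so the contribution from Fp is taken at (h, 0) itself and that from Fs is displaced by the full motion vector, matching the v = 0 equation above.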
As previously mentioned, improved results may be achieved by weighting the input frame pixel values from which the output pixel value is determined in dependence upon the likely accuracy in location of the correct input frame pixels to be used. In this case, Po(h, v) may be given by:

Po(h, v) = (1 − vf/((L − 1)F)) Pp[(h, v) − (vf/((L − 1)F))(mh, mv)] + (vf/((L − 1)F)) Ps[(h, v) + (1 − vf/((L − 1)F))(mh, mv)]

If the location of an input frame pixel or patch calculated using the above formulae is in fact "off screen", ie beyond the frame limits, the output pixel value may be determined from a single input frame only.
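The weighted combination and the off-screen fallback can be sketched together. The bounds check, nearest-pixel rounding, and function name are our assumptions about details the text leaves open:

```python
def weighted_output_pixel(frame_p, frame_s, h, v, motion, alpha):
    """Weight each contribution by how close the output pixel lies in
    time to the frame it is fetched from: (1 - alpha) for Fp, alpha
    for Fs. If a displaced location falls beyond the frame limits,
    the output pixel is taken from the single usable frame alone."""
    rows, cols = len(frame_p), len(frame_p[0])  # both frames same size
    mh, mv = motion
    pp = (round(v - alpha * mv), round(h - alpha * mh))              # into Fp
    ps = (round(v + (1 - alpha) * mv), round(h + (1 - alpha) * mh))  # into Fs

    def on_screen(p):
        return 0 <= p[0] < rows and 0 <= p[1] < cols

    if on_screen(pp) and on_screen(ps):
        return (1 - alpha) * frame_p[pp[0]][pp[1]] + alpha * frame_s[ps[0]][ps[1]]
    if on_screen(pp):
        return float(frame_p[pp[0]][pp[1]])
    return float(frame_s[ps[0]][ps[1]])
```

Note the weights (1 − α) and α sum to one, so uniform areas keep their brightness regardless of line number.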
Figures 7 and 9 are diagrams equivalent to those of Figures 6 and 8 for the case where an image acquired by a raster scan acquisition device is processed for writing to film. Here, successive lines of the progressive scan input frames Fi are temporally offset from one another, the temporal offset between individual pixels Pi in a line being neglected. All pixels Po in each output field Fo are temporally aligned.
As before, the interpolator 29 uses the motion vector M assigned to a given output pixel Po(h, v) and the interpolation ratio from the temporal position generator 30 to determine the value of the output pixel from the values of the appropriate input frame pixels (or patches) according to:

Po(h, v) = ½ Pp[(h, v) − α(mh, mv)] + ½ Ps[(h, v) + β(mh, mv)]

From Figure 9:

α = t1/(t1 + t2) and β = t2/(t1 + t2)

where: t1 = temporal offset of Po(h, v) from frame Fp; and t2 = temporal offset of Po(h, v) from frame Fs.

If: F = total input frame period; f = active input frame period; and a and b are as shown in Figure 9:

t1/t2 = a/b; b/f = v/(L − 1); and a + b = F

Thus:

α = a/(a + b) = a/F = (F − b)/F = 1 − vf/((L − 1)F)

and:

β = b/(a + b) = b/F = vf/((L − 1)F)

Therefore:

Po(h, v) = ½ Pp[(h, v) − (1 − vf/((L − 1)F))(mh, mv)] + ½ Ps[(h, v) + (vf/((L − 1)F))(mh, mv)]

or with weighting:

Po(h, v) = (vf/((L − 1)F)) Pp[(h, v) − (1 − vf/((L − 1)F))(mh, mv)] + (1 − vf/((L − 1)F)) Ps[(h, v) + (vf/((L − 1)F))(mh, mv)]

It will be appreciated that skew mismatch resulting from other factors than those specifically dealt with in the above examples may be corrected. For example, skew mismatch resulting from the shutter movement in a film projector when displaying images originally acquired by a CCD or tube camera may be corrected by accounting for the shutter movement in calculation of the temporal offset by the temporal position generator.
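For the raster-acquisition case the roles of the two ratios are reversed relative to Figure 8. A sketch with illustrative parameter values (the function name is ours):

```python
# Figure 9 case (skewed input frames, temporally flat output fields):
# the backward displacement fraction alpha shrinks down the frame from
# 1 on line 0, while the forward fraction beta grows correspondingly.

def raster_to_film_ratios(v, L, f, F):
    """Return (alpha, beta) for line v: alpha = 1 - vf/((L - 1)F)
    back into Fp, beta = vf/((L - 1)F) forwards into Fs; the two
    always sum to 1."""
    beta = (v * f) / ((L - 1) * F)
    return 1.0 - beta, beta
```

Comparing with the earlier sketch for Figure 8, the two cases simply swap alpha and beta, reflecting that the skew has moved from the output side to the input side.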
As previously explained, frame rate conversion, such as the processes described in UK patent application no 9024836.0, may be carried out during the motion compensated interpolation process as well as skew correction. In this case, the temporal position of each interpolated output pixel will depend both on the temporal offsets or interpolation ratios for individual pixels supplied by the temporal position generator 30, and on the temporal offset between the output frame and the two progressive scan input frames from which it is formed.

Claims (15)

1. A method of processing an input digital video signal to compensate for a skew mismatch in the temporal characteristics of the image source and the intended display device, the method comprising producing an output video signal from the input video signal by motion compensated temporal interpolation such that at least some of the output pixels in each interpolated output field or frame are each derived from input pixels in a pair of input fields or frames, the input pixels being spatially displaced from the position of the output pixel by amounts dependent upon the skew mismatch of the image source and the display device.
2. A method as claimed in claim 1, wherein the ratio of the spatial displacements between the position of the output pixel and the said input pixels changes progressively with successive lines of pixels in each output field or frame.
3. A method as claimed in claim 1, wherein the ratio of the spatial displacements between the position of the output pixel and the said input pixels changes progressively along each line of pixels, and with successive lines of pixels, in each output field or frame.
4. A method as claimed in any preceding claim, for compensating for distortion resulting from display by a raster scan display device of an image acquired by a device which acquires all points in each field or frame of the image over substantially the same time interval.
5. A method as claimed in any one of claims 1 to 3, for compensating for distortion resulting from display by a raster scan display device of an image derived from a source which simulates idealised sampling.
6. A method as claimed in claim 4 or claim 5, wherein the said input pixels are spatially displaced from the position of the output pixel by amounts dependent upon the time interval between display of the raster line corresponding to the output pixel and the beginning of the corresponding field or frame of the display.
7. A method as claimed in claim 4 or claim 5, wherein the said input pixels are spatially displaced from the position of the output pixel by amounts dependent upon the time interval between display of the output pixel and the beginning of the corresponding field or frame of the display.
8. A method of processing an input digital video signal, in which pixels of output fields or frames are temporally interpolated between pairs of input fields or frames using motion compensated temporal interpolation, characterised in that at least some of the pixels of an output field or frame are temporally interpolated with a different interpolation ratio to other pixels of that output field or frame.
9. A method as claimed in claim 8, wherein the interpolation ratio changes progressively down the output field or frame.
10. A method as claimed in claim 8 or claim 9, wherein the interpolation ratio changes progressively along each line of pixels in the output field or frame.
11. A method as claimed in any preceding claim, wherein the value of each interpolated output pixel is a weighted combination of the values of the input pixels from which the output pixel is formed, the ratio of the weighting factors changing progressively with successive lines of the output field or frame.
12. A method as claimed in any one of the preceding claims, including producing a series of progressive scan frames, one from each of the fields of a video signal obtained from the image source, and utilising the progressive scan frames as the said input signal.
13. A method as claimed in any one of the preceding claims, wherein the output video signal corresponds to a different frame rate to the input video signal.
14. A method of processing a video signal substantially as hereinbefore described with reference to Figures 3 to 9 of the accompanying drawings.
15. Apparatus adapted to perform the method of any one of the preceding claims.
GB9203306A 1992-02-17 1992-02-17 Digital video signal processing Expired - Fee Related GB2264416B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB9203306A GB2264416B (en) 1992-02-17 1992-02-17 Digital video signal processing

Publications (3)

Publication Number Publication Date
GB9203306D0 GB9203306D0 (en) 1992-04-01
GB2264416A true GB2264416A (en) 1993-08-25
GB2264416B GB2264416B (en) 1995-07-12

Family

ID=10710511

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9203306A Expired - Fee Related GB2264416B (en) 1992-02-17 1992-02-17 Digital video signal processing

Country Status (1)

Country Link
GB (1) GB2264416B (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1997004634A2 (en) * 1995-07-25 1997-02-13 Daewoo Electronics Co., Ltd. Method and apparatus for pre-compensating an asymmetrical picture in a projection system for displaying a picture
WO1997004634A3 (en) * 1995-07-25 1997-03-06 Daewoo Electronics Co Ltd Method and apparatus for pre-compensating an asymmetrical picture in a projection system for displaying a picture
AU706332B2 (en) * 1995-07-25 1999-06-17 Daewoo Electronics Co., Ltd. Method and apparatus for pre-compensating an asymmetrical picture in a projection system for displaying a picture
GB2343316A (en) * 1998-10-26 2000-05-03 Sony Uk Ltd Video interpolation
GB2343316B (en) * 1998-10-26 2003-02-26 Sony Uk Ltd Video processing
EP1641276A1 (en) * 2004-09-22 2006-03-29 Nikon Corporation Image processing apparatus, program, and method for performing preprocessing for movie reproduction of still images
US7764310B2 (en) 2004-09-22 2010-07-27 Nikon Corporation Image processing apparatus, program and method for performing preprocessing for movie reproduction of still images
US8111297B2 (en) 2004-09-22 2012-02-07 Nikon Corporation Image processing apparatus, program, and method for performing preprocessing for movie reproduction of still images
US8289411B2 (en) 2004-09-22 2012-10-16 Nikon Corporation Image processing apparatus, program, and method for performing preprocessing for movie reproduction of still images
WO2007067223A1 (en) * 2005-12-06 2007-06-14 Raytheon Company Image processing system with horizontal line registrations for improved imaging with scene motion
US7697073B2 (en) 2005-12-06 2010-04-13 Raytheon Company Image processing system with horizontal line registration for improved imaging with scene motion
US8711239B2 (en) 2007-07-31 2014-04-29 Nikon Corporation Program recording medium, image processing apparatus, imaging apparatus, and image processing method

Legal Events

Date Code Title Description
730A Proceeding under section 30 patents act 1977
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20110217