GB2165417A - Measurement and correction of film unsteadiness in a video signal - Google Patents

Measurement and correction of film unsteadiness in a video signal

Info

Publication number
GB2165417A
GB2165417A GB08522347A GB8522347A GB2165417A GB 2165417 A GB2165417 A GB 2165417A GB 08522347 A GB08522347 A GB 08522347A GB 8522347 A GB8522347 A GB 8522347A GB 2165417 A GB2165417 A GB 2165417A
Authority
GB
United Kingdom
Prior art keywords
video signal
displacement
unsteadiness
signals
interpolation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB08522347A
Other versions
GB8522347D0 (en)
GB2165417B (en)
Inventor
Karina Lyn Minakovic
Nicholas Edward Tanton
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
British Broadcasting Corp
Original Assignee
British Broadcasting Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from GB848422716A external-priority patent/GB8422716D0/en
Application filed by British Broadcasting Corp filed Critical British Broadcasting Corp
Priority to GB08522347A priority Critical patent/GB2165417B/en
Publication of GB8522347D0 publication Critical patent/GB8522347D0/en
Publication of GB2165417A publication Critical patent/GB2165417A/en
Application granted granted Critical
Publication of GB2165417B publication Critical patent/GB2165417B/en
Expired legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/144Movement detection
    • H04N5/145Movement estimation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/527Global motion vector estimation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681Motion detection
    • H04N23/6811Motion detection based on the image signal
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682Vibration or motion blur correction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/253Picture signal generating by scanning motion picture films or slide opaques, e.g. for telecine

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Systems (AREA)

Abstract

Unsteadiness in a video signal derived from cine film is corrected by using a motion vector measurement circuit 14 to derive from the input video signal displacement signals which represent the horizontal and vertical displacement of successive frames. A two-dimensional interpolator 18 forms an output video signal by interpolation from the input video signal under the control of a control circuit 16 in dependence upon the displacement signals. An exception detector 22 can detect pans, zooms and shot changes and inhibit the interpolation control when such occur. The motion vector measurement is preferably by determining the differences between adjacent pixels on the same line, corresponding pixels on adjacent lines, and corresponding pixels on adjacent frames, and combining the resultants in accordance with expressions (3) and (4) herein, which are based on a first-order-truncated Taylor expansion.

Description

SPECIFICATION
Measurement and Correction of Film Unsteadiness in a Video Signal

A significant proportion of film shown on television suffers from image unsteadiness. This impairment is particularly associated with 16 mm film and results from frame-to-frame differences in the position of the optical image on the transmission print. When scanned by the telecine, these frame-to-frame positional differences are usually manifest as a horizontal weaving motion (with a typical period of about 12 seconds) or as a vertical hopping motion.
Various potential sources of this unsteadiness have been identified and discussed; see in particular two papers presented at BKSTS Film '71, respectively by Wright, D. T., "16 mm Film: Image steadiness in television presentation Part 1: The measurement of unsteadiness and prediction of subjective impairment", and Sanders, J. R., "16 mm Film: Image steadiness in television presentation Part 2: Causes of unsteadiness" (BBC Research Department Reports Nos. 1971/28 and 1971/29). The sources of unsteadiness include film positioning inaccuracy in the camera, perforation inaccuracy in the film stock, printer negative/positive positioning inaccuracy, and the telecine film transport; of these, the high-speed optical printer contributes most to image unsteadiness.
In principle, individual cameras, printers and telecines can be checked and adjusted to reduce their individual contributions to image unsteadiness; similarly, it would be feasible to modify cameras so that they added a test pattern to the exposed image out of shot, a pattern which would be subject to the same treatment as the required image and could be used to measure and correct for unsteadiness when the film is scanned in the telecine. To perform these adjustments and modifications universally is wholly impractical; the broadcaster does not in general have complete control over the sources and high-speed printing equipment with which the transmission print of any film has been exposed.
A method of compensating for film image unsteadiness using the video signal from the telecine is therefore highly desirable.
The invention is defined in the claims below to which reference should now be made.
The invention will be described in more detail by way of example, with reference to the drawings, in which:
Figure 1 is a block circuit diagram of a basic unsteadiness compensator embodying the invention;
Figure 2 is a block diagram of a modified unsteadiness compensator for use with RGB signals;
Figure 3 is a diagram showing how a zoom is characterised by motion being symmetrical about the optical centre of the display;
Figure 4 is a block circuit diagram for measurement of the element-to-element difference EDIF;
Figure 5 is a vertical position versus time diagram illustrating the lines used in measurement of the line-to-line difference LDIF;
Figure 6 is a block diagram of a circuit giving the signals required to measure EDIF, FDIF and LDIF;
Figure 7 is a block circuit diagram based on Figure 6 including second order terms;
Figure 8 is a block diagram of a circuit used to form the products; and
Figure 9 is a block diagram of a circuit used to accumulate the product terms.
Figure 1 shows an unsteadiness compensator 10 with an input 12. A motion vector measurement circuit 14 is connected to the input and provides an output to an interpolation control circuit 16, which controls a two-dimensional (vertical/horizontal) interpolator 18. A compensating delay 20 provides an equivalent delay in the signal path to that of circuits 14 and 16. Also connected to the input is an exception detector 22 which provides a second over-riding input to the interpolation control circuit 16. The output 24 of the interpolator 18 constitutes the circuit output.
In Figure 2 the signal path is split into separate R, G and B paths for the red, green and blue signals.
Separate interpolators are controlled in parallel. A combining circuit 26 forms a luminance signal Y for the control circuitry.
Central to a successful method of unsteadiness compensation is the measurement of the film image position on a frame-by-frame basis. Where an optical image is still available (e.g. in the optical printer or a modified telecine), a measurement of the frame-to-frame motion could be performed with respect to external references (sprocket hole, frame bar, test pattern exposed out of shot in the camera, etc.). However, once the optical image has been scanned, the image information and reference (in this case the syncs) are locked together, sampled by the scanning process. A measure of the image unsteadiness can now only be obtained from the scanned signal itself, and this measurement must be performed in the presence of moving picture detail and of random fluctuations caused by film grain, film surface defects (dirt, scratches etc.) and electrical noise. The measurement must also be made with respect to an assumed reference, for example with respect to the mean position over a representative number of frames. The derived motion vectors can then be used to control the 2-dimensional interpolator, thus enabling each frame to be repositioned horizontally and vertically with sufficient accuracy to significantly reduce the visibility of unsteadiness.
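As an illustration of the reference just mentioned, the following Python/NumPy sketch turns per-frame position estimates into correction offsets measured against a running-mean reference; the function name and the 25-frame window are illustrative assumptions rather than values taken from this specification.

import numpy as np

def unsteadiness_from_positions(frame_positions, window=25):
    # frame_positions: (N, 2) array of cumulative (x, y) image positions,
    # obtained by summing the frame-to-frame displacement estimates.
    # window: number of frames over which the reference position is averaged
    # (an assumed value; the text only asks for a "representative number").
    positions = np.asarray(frame_positions, dtype=float)
    kernel = np.ones(window) / window
    reference = np.column_stack([
        np.convolve(positions[:, 0], kernel, mode="same"),
        np.convolve(positions[:, 1], kernel, mode="same"),
    ])
    # The residual is the unsteadiness relative to the assumed reference;
    # it is what the interpolator would be asked to cancel.
    return positions - reference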
As noted above, the unsteadiness measurement must be made in the presence of frame-to-frame differences caused by real motion within the scene and by random fluctuations. Apart from global scene movement such as pans and zooms, these spurious signals are likely to be spatially localised, whereas the unsteadiness represents a gross translation of the entire scene. By performing unsteadiness measurement on the entire scanned image, the effects of spatially localised 'noise' signals are considerably diluted.
Furthermore, by suitably dividing the measurement area and analysing the results of each measurement, the global scenic changes can be categorised and allowance made for them, e.g. the interpolator might be disabled when pans and zooms, or shot changes, are detected.
For example, a zoom could be detected by performing motion measurement on the four quarters of the image area, as indicated in Figure 3, and comparing the estimated motion vectors for the four quarters. When the movement is symmetrical about the optical picture centre, as shown by the arrows, zooming is assumed to have taken place. To the extent that the arrows all point in the same direction, a pan may be presumed to have occurred. Detection can be improved by looking for systematic displacements between successive frames. Similarly, it will be necessary to relax the interpolator when a shot change has been detected, and a known shot-change detector may be used for this purpose.
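A minimal Python sketch of the quadrant comparison described above is given below; the threshold, the labels and the exact decision rules are illustrative assumptions, since the specification only states the principle.

import numpy as np

def classify_global_motion(quadrant_vectors, threshold=0.5):
    # quadrant_vectors: 4x2 array of (dx, dy) estimates for the quadrants,
    # ordered top-left, top-right, bottom-left, bottom-right.
    v = np.asarray(quadrant_vectors, dtype=float)
    mean = v.mean(axis=0)

    # Pan: all four quadrants move by roughly the same, sizeable vector.
    if np.linalg.norm(mean) > threshold and np.allclose(v, mean, atol=threshold):
        return "pan"

    # Zoom: motion symmetrical about the picture centre, so diagonally
    # opposite quadrant vectors roughly cancel while each is sizeable.
    pairs_cancel = (np.linalg.norm(v[0] + v[3]) < threshold and
                    np.linalg.norm(v[1] + v[2]) < threshold)
    if pairs_cancel and np.linalg.norm(v, axis=1).min() > threshold:
        return "zoom"

    # Otherwise treat any common component as candidate unsteadiness.
    return "unsteadiness"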
The use of an exception detector 22 sensitive to image movement caused other than by unsteadiness enables a more sophisticated motion detector to be used with less danger of serious errors being introduced.
Methods of detecting the unsteadiness will now be described.
Several methods for motion estimation might be considered, for instance:
(i) Taylor expansion: using a truncated Taylor expansion of the frame difference, based on the paper by Netravali, A. N. and Robbins, J. D., "Motion-compensated television coding: Part 1", B.S.T.J., Vol. 58, No. 3, March 1979.
(ii) Phase correlation: a phase correlation method, based on the paper by Pearson, J. J., Hines, D. C., Goldsman, S. and Kuglin, C. D., "Video rate image correlation processor", S.P.I.E. Vol. 119, Application of Digital Image Processing (IOCC 1977).
(iii) Minimum frame difference: determination of the shift required in order to minimise the difference between successive frames.
With the Taylor expansion method, truncating the Taylor series after the first order terms gives expressions for the horizontal and vertical displacements as:

Δx = [Σ(EDIF·LDIF)·Σ(LDIF·FDIF) − Σ(EDIF·FDIF)·Σ(LDIF²)] / [Σ(LDIF²)·Σ(EDIF²) − (Σ(EDIF·LDIF))²]   ... (3)

Δy = [Σ(EDIF·LDIF)·Σ(EDIF·FDIF) − Σ(LDIF·FDIF)·Σ(EDIF²)] / [Σ(LDIF²)·Σ(EDIF²) − (Σ(EDIF·LDIF))²]   ... (4)

where FDIF is the frame difference (i.e. the difference between corresponding pixels on adjacent frames), EDIF and LDIF are linear approximations to the first derivatives with respect to x and y respectively (x defines the horizontal position, positive to the right, and y defines the vertical position, positive downwards), i.e. the differences between adjacent pixels on the same line and on adjacent lines respectively, and the summations run over the measurement area. The derivation of these equations is given in Appendix A below.
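The following NumPy sketch evaluates expressions (3) and (4) for a pair of grey-scale frames held as floating-point arrays, assuming the whole frame is used as the measurement area and taking FDIF as the current frame minus the previous frame as in equation (1) of Appendix A; it is a software illustration of the calculation, not a description of the hardware described later.

import numpy as np

def estimate_displacement(prev, curr):
    # Central-difference approximations to the spatial derivatives (EDIF,
    # LDIF) and the frame difference (FDIF), taken over interior pixels.
    edif = 0.5 * (curr[1:-1, 2:] - curr[1:-1, :-2])   # d/dx
    ldif = 0.5 * (curr[2:, 1:-1] - curr[:-2, 1:-1])   # d/dy
    fdif = curr[1:-1, 1:-1] - prev[1:-1, 1:-1]

    # Product sums appearing in expressions (3) and (4).
    see = np.sum(edif * edif)
    sll = np.sum(ldif * ldif)
    sel = np.sum(edif * ldif)
    sef = np.sum(edif * fdif)
    slf = np.sum(ldif * fdif)

    det = sll * see - sel * sel
    if det == 0.0:
        return 0.0, 0.0        # no usable picture detail

    dx = (sel * slf - sef * sll) / det   # expression (3)
    dy = (sel * sef - slf * see) / det   # expression (4)
    return dx, dy

As noted below for method (i), the estimate is only valid for small (roughly sub-pixel) shifts.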
The second, phase correlation, method finds an estimate of the motion by determining the point at which the correlation of the phase information of successive frames reaches a maximum. A more detailed explanation of the underlying mathematics is given in Appendix B.
Minimising the frame difference appears, at first, to be far simpler than phase correlation. However, the two methods do have a fundamental similarity as shown in Appendix C.
It may seem that either method (ii) or (iii) would be favoured, since both have the potential to measure even large displacements, unlike (i) which is valid only for sub-pixel shifts. But the ability to detect large shifts is of secondary importance when the high cost of the hardware is also considered: when using the phase correlation method or minimising the frame difference, interpolators are required to enable measurement of non-integral pixel shifts. Furthermore, large frame-to-frame displacements will usually be intentional, e.g. special effects simulating earthquakes and explosions, fast pans and shot changes. Such motion does not require correction. Fast motion due to a pan could be identified by taking the displacement estimates of several successive frames and noting the systematic nature of the shifts. Earthquake simulations, however, will be essentially random and could possibly appear as extremely unsteady film.
Therefore, a system which fails to detect large displacements has the advantage that it reduces the complexity of any motion analyser. Accordingly, the Taylor expansion-based method is preferred.
Finally, it should be noted that Appendix A also derives expressions to estimate motion by taking the Taylor expansion to second order terms. However, computer simulations do not suggest that any overall increase in accuracy would be achieved although the arithmetic is significantly more complex.
The method of movement estimation will now be described with reference to the circuit diagram of Figure 4. As given in the Appendix, EDIF(x, y) = ½[I(x+1, y) − I(x−1, y)]. Neglecting the factor of ½ (which can be accounted for in the later calculations), EDIF is found by delaying the video signal by two pixel durations and subtracting it from the undelayed signal. Thus, as shown in Figure 4, the input signal at terminal 12 is applied to two series-connected pixel (picture element) delays 32 and 34. The output of delay 34 is applied to the inverting input of a subtractor 36, which receives the input video at its non-inverting input and provides a measure of EDIF at its output 38.
Similarly, LDIF(x, y) = ½[I(x, y+1) − I(x, y−1)]. If the picture were scanned sequentially then LDIF could be obtained simply by replacing the element delays in the circuit 30 shown in Figure 4 by line delays.
However, in the case of interlaced scanning, the lines used in determining LDIF do not occur in the same field as the line about which LDIF is being measured. It should be remembered that, since the pictures are from film, the information on field 1 and field 2 does represent the same instant in time.
Consider the two cases of LDIF for lines in field 1 and for lines in field 2 separately. Numbering the lines as in PAL System I, for pixels at line L1 in field 1:

FDIF(x, L1) = I(x, L1) − I(x, L1−625)
LDIF(x, L1) = I(x, L1+313) − I(x, L1+312)

and for pixels at line L2 in field 2:

FDIF(x, L2) = I(x, L2) − I(x, L2−625)
LDIF(x, L2) = I(x, L2−312) − I(x, L2−313)
Figure 5 shows that the last pixel required to find FDIF and LDIF is always in field 2, regardless of the field in which the pixel occurs. Hence, when finding Σ(LDIF·FDIF) a delay of one field must be introduced into at least one set of values. Furthermore, since for field 1 values the lines needed for measurement of LDIF occur in field 2 (i.e. after the corresponding field 1 lines) whereas for field 2 values they occur in field 1 (i.e. before the corresponding field 2 lines), it is necessary to switch between the two cases. Thus a signal which indicates whether the present field is odd or even is required. A block diagram of a circuit giving the required signal is shown in Figure 6.
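Before turning to the hardware of Figure 6, the field handling can be illustrated in software; the sketch below assumes the two equal-sized fields of each film frame have already been separated into floating-point arrays indexed by per-field row number (rather than the sequential 625-line numbering used above) and, as in the hardware, factors of ½ are neglected.

import numpy as np

def fdif_ldif_field1(prev_f1, curr_f1, curr_f2):
    # Frame difference for field-1 pixels: same line of the previous frame.
    fdif = curr_f1 - prev_f1

    # Vertical difference for field-1 pixels: because the two fields of a
    # film frame show the same instant, the field-2 lines immediately
    # below and above each field-1 line can be differenced directly.
    ldif = np.zeros_like(curr_f1)
    ldif[1:] = curr_f2[1:] - curr_f2[:-1]   # row 0 has no line above (HOLD)
    return fdif, ldif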
Figure 6 thus shows a circuit 40 forming the first part of the motion vector measurement circuit 14 of Figure 1 and which is connected to the input 12. A series of delay elements 42a to 42i is used, providing the following delay times:

42a  one line
42b  311 lines
42c  one line less one pixel
42d  one pixel
42e  one pixel
42f  one line less one pixel
42g  311 lines
42h  one line
42i  312 lines

Some of the outputs, namely the input and the outputs of delays 42a, 42g and 42h, are applied to a selector 44 which receives a signal at an input 46 indicating whether an odd or even field is involved. Using the terminology adopted above, for field 1 the input and the output of delay 42a are used as the L1 and L2 signals for forming LDIF, and for field 2 the outputs of delays 42g and 42h are employed instead. Corresponding signals E1 and E2 for EDIF are taken from delays 42e and 42c, and signals F1 and F2 for FDIF from delays 42i and 42d.
Since the pixels required to estimate the second derivatives occur in the same field as the corresponding pixel for which the values are required (see Appendix A), making provision for the measurement of second order terms does not significantly increase the circuit complexity. Therefore, the means for measuring these terms can easily be included. A block diagram of the resulting circuit is shown in Figure 7. In practice, however, little advantage was obtained from the extra complexity. Figure 7 is similar to Figure 6, the only essential difference being the addition of some further delays in the delay chain 42 so as to make more signals available simultaneously. The delay values and signals are noted on the Figure, which is not therefore described in more detail.
The signals required for EDIF, LDIF and FDIF are then passed to two identical circuits 50, a block diagram of which is shown in Figure 8, which find the difference signals and form the products EDIF·FDIF and EDIF², and LDIF·FDIF and LDIF². Twos-complement code is used in performing the differencing in subtractors 52, 54. The results are then converted using gates 56 and adders 58 to sign-and-magnitude representation for multiplication, since in this way 8-bit multipliers may be used. The multiplier 60 produces EDIF·FDIF and LDIF·FDIF, while the terms EDIF² and LDIF² are obtained from a PROM 62 addressed by the corresponding difference signals. These product terms are then converted back to twos-complement code using gates 64 and 66 and an adder 68 and fed to a circuit 70 which accumulates the products as shown in Figure 9. This includes an input buffer register 72 for receiving the values to be summed, an accumulator loop comprising an adder register 74 (also having a reset input) and a delay 76, and an output buffer register 78 for holding the accumulated output at the end of each frame. Since the first and last pixel of each active line, the first and last line of the active picture, and pixels which occur during blanking do not have useful values of EDIF and LDIF, a control signal (HOLD) is sent to the circuit to ensure that meaningless values are ignored. An additional signal TAKE is used to indicate the end of the active picture, at which summation is complete, and a signal CLEAR clears the registers before the next active frame.
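In software terms the accumulator of Figure 9 behaves roughly as in the sketch below; the class and method names, and the boolean arguments standing in for the HOLD, TAKE and CLEAR signals, are illustrative only.

class ProductAccumulator:
    def __init__(self):
        self.running = 0.0   # accumulator loop (adder register 74 and delay 76)
        self.latched = 0.0   # output buffer register 78

    def clock(self, value, hold=False, take=False, clear=False):
        if not hold:                 # HOLD: ignore meaningless samples
            self.running += value
        if take:                     # TAKE: end of active picture, latch the sum
            self.latched = self.running
        if clear:                    # CLEAR: reset before the next active frame
            self.running = 0.0
        return self.latched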
The sums ΣEDIF², ΣLDIF², Σ(EDIF·FDIF) and Σ(LDIF·FDIF) from circuits as in Figure 9 are then passed to a computing device which calculates values for Δx and Δy in accordance with the equations (3) and (4) given above. The result constitutes the output of the motion vector measurement circuit 14 of Figure 1.
Provided it is not inhibited by the exception detector 22, the control circuit 16 then instructs the interpolator 18 to make the necessary shifts to compensate for the detected displacements.
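A minimal sketch of the repositioning step follows, using bilinear interpolation for brevity (the specification leaves the choice of interpolation kernel open); the sign convention assumes (dx, dy) is the measured displacement of the current frame relative to the reference, as in equation (2) of Appendix A, so the frame is resampled at (x+dx, y+dy) to cancel it.

import numpy as np

def reposition_frame(frame, dx, dy):
    # frame: 2-D float array; dx positive to the right, dy positive down.
    h, w = frame.shape
    ys, xs = np.meshgrid(np.arange(h, dtype=float),
                         np.arange(w, dtype=float), indexing="ij")

    # Positions at which the input is sampled to undo the displacement.
    sx = np.clip(xs + dx, 0, w - 1)
    sy = np.clip(ys + dy, 0, h - 1)

    x0 = np.floor(sx).astype(int)
    y0 = np.floor(sy).astype(int)
    x1 = np.minimum(x0 + 1, w - 1)
    y1 = np.minimum(y0 + 1, h - 1)
    fx = sx - x0
    fy = sy - y0

    # Bilinear weighting of the four surrounding pixels.
    top = frame[y0, x0] * (1 - fx) + frame[y0, x1] * fx
    bot = frame[y1, x0] * (1 - fx) + frame[y1, x1] * fx
    return top * (1 - fy) + bot * fy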
Appendices - Motion Estimation Algorithms

Appendix A - Taylor Expansion

Consider a pixel at position (x, y), intensity I(x, y).
Define the frame difference between successive frames as

FDIF(x, y) = I(x, y, t) − I(x, y, t−T)   (1)

where T is the interval between successive frames, and where lines are numbered sequentially.
Suppose a pure translation (Δx, Δy) has occurred. The part of the picture represented by the pixel at position (x, y), time t−T, has moved to (x+Δx, y+Δy) by time t.
Hence

I(x, y, t−T) = I(x+Δx, y+Δy, t)   (2)

Substituting (2) in (1) gives

FDIF(x, y) = I(x, y, t) − I(x+Δx, y+Δy, t)

For small Δx, Δy, Taylor expansion gives

FDIF(x, y) ≈ −[Δx·(∂I/∂x) + Δy·(∂I/∂y) + higher order terms]
A1. First Order Expansion

Considering first order terms only then gives:

FDIF(x, y) ≈ −[Δx·(∂I/∂x) + Δy·(∂I/∂y)]
Δx and Δy may now be found using linear regression. For the general expression

Z_i = β1·u_i + β2·v_i + ε_i

where ε_i is the error term, the sum of squared errors is

Σ(ε_i²) = Σ(Z_i − β1·u_i − β2·v_i)²

and the values of β1 and β2 which minimise the mean square error are found at

β1 = [Σ(v_i²)·Σ(u_i·Z_i) − Σ(u_i·v_i)·Σ(v_i·Z_i)] / [Σ(u_i²)·Σ(v_i²) − (Σ(u_i·v_i))²]

β2 = [Σ(u_i²)·Σ(v_i·Z_i) − Σ(u_i·v_i)·Σ(u_i·Z_i)] / [Σ(u_i²)·Σ(v_i²) − (Σ(u_i·v_i))²]
Taking linear approximations to the first derivatives as

∂I/∂x ≈ EDIF(x, y) = ½[I(x+1, y) − I(x−1, y)]

∂I/∂y ≈ LDIF(x, y) = ½[I(x, y+1) − I(x, y−1)]

and setting Z_i = FDIF, u_i = EDIF and v_i = LDIF (so that β1 = −Δx and β2 = −Δy),
then

Δx = [Σ(EDIF·LDIF)·Σ(LDIF·FDIF) − Σ(EDIF·FDIF)·Σ(LDIF²)] / [Σ(LDIF²)·Σ(EDIF²) − (Σ(EDIF·LDIF))²]   ... (3)

Δy = [Σ(EDIF·LDIF)·Σ(EDIF·FDIF) − Σ(LDIF·FDIF)·Σ(EDIF²)] / [Σ(LDIF²)·Σ(EDIF²) − (Σ(EDIF·LDIF))²]   ... (4)

A2. Second Order Expansion

If second order terms are also used then the expression for FDIF becomes:

FDIF(x, y) ≈ −[Δx·(∂I/∂x) + Δy·(∂I/∂y) + ½Δx²·(∂²I/∂x²) + ½Δy²·(∂²I/∂y²) + Δx·Δy·(∂²I/∂x∂y)]
Writing this in the general form

Z_i = β1·a_i + β2·b_i + β1²·c_i + β2²·d_i + β1·β2·f_i + ε_i

the sum of squared errors is

Σ(ε_i²) = Σ(Z_i − β1·a_i − β2·b_i − β1²·c_i − β2²·d_i − β1·β2·f_i)²

As before, this is minimised with respect to β1 and β2,
which gives

Σ(a_i + 2β1·c_i + β2·f_i)·(Z_i − β1·a_i − β2·b_i − β1²·c_i − β2²·d_i − β1·β2·f_i) = 0

and

Σ(b_i + 2β2·d_i + β1·f_i)·(Z_i − β1·a_i − β2·b_i − β1²·c_i − β2²·d_i − β1·β2·f_i) = 0

In practice, it is found that some of these terms may be neglected and the equations reduce to expressions (5) and (6).
It can be seen that this method results in two simultaneous cubics. For simplicity, an approximate solution can be found by substituting estimates of β2 and β1 (from first order considerations) into (5) and (6) respectively. This leaves two independent cubics which can then be solved analytically. Since cubics have three roots, the one nearest to the initial estimates is chosen.
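The root-selection step can be sketched as follows; the cubic coefficients are assumed to come from the reduced equations (5) and (6), which are not reproduced above, so the function is illustrative only.

import numpy as np

def nearest_real_root(cubic_coeffs, initial_estimate):
    # cubic_coeffs: [a3, a2, a1, a0] for a3*b**3 + a2*b**2 + a1*b + a0 = 0,
    # one of the two reduced cubics with the other unknown held at its
    # first-order estimate.
    roots = np.roots(cubic_coeffs)
    real_roots = roots[np.abs(roots.imag) < 1e-9].real
    if real_roots.size == 0:
        return initial_estimate          # fall back to the first-order value
    return real_roots[np.argmin(np.abs(real_roots - initial_estimate))]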
Approximations to the first derivatives can be found as before. The second order derivatives are estimated in a similar manner, i.e.

∂²I/∂x² ≈ ¼[I(x+2, y) − 2I(x, y) + I(x−2, y)]

similarly

∂²I/∂y² ≈ ¼[I(x, y+2) − 2I(x, y) + I(x, y−2)]

and

∂²I/∂x∂y ≈ ¼[I(x+1, y+1) + I(x−1, y−1) − I(x−1, y+1) − I(x+1, y−1)]

Appendix B - Phase Correlation Method

Let the two sampled pictures being compared be represented by g1(x, y) and g2(x, y), with two-dimensional discrete Fourier transforms G1(m, n) and G2(m, n) respectively.
Assuming that a pure translation (dx, dy) has occurred, then

G2(m, n) = exp(−j(m·dx + n·dy))·G1(m, n)

and

|G1·G2*| = |G1|² = G1·G1*

Thus

G1·G2* / |G1·G2*| = exp(j(m·dx + n·dy))
But exp(j(m·dx + n·dy)) is the transform of a delta function. Therefore, computing G1·G2*/|G1·G2*| and finding the inverse transform will give a function with a maximum at (dx, dy). Of course, since the cross-correlation of g1 and g2 transforms to G1·G2*, this is equivalent to finding the cross-correlation of the two pictures and then determining where the maximum occurs. A delta function will not result in practice unless noiseless, purely cyclically translated pictures are used.
It will be noted that using a low pass filter on the inverse transform will reduce spurious peaks due to noise.
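A compact NumPy sketch of the phase correlation measurement is given below; the conjugation order is arranged to suit NumPy's FFT sign convention so that the correlation surface peaks at (dx, dy), only integer-pixel cyclic shifts are resolved (sub-pixel measurement would need interpolation around the peak), and the small constant guarding the division is an implementation assumption.

import numpy as np

def phase_correlation_shift(g1, g2):
    # Returns (dx, dy) such that g2 is approximately g1 cyclically
    # translated by (dx, dy).
    G1 = np.fft.fft2(g1)
    G2 = np.fft.fft2(g2)

    cross_power = G2 * np.conj(G1)
    cross_power /= np.maximum(np.abs(cross_power), 1e-12)   # keep phase only

    surface = np.real(np.fft.ifft2(cross_power))
    dy, dx = np.unravel_index(np.argmax(surface), surface.shape)

    # Peaks beyond half the picture size correspond to negative shifts.
    h, w = surface.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dx, dy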
Appendix C - Minimum Frame Difference

In this method, the amount by which a picture is displaced from the reference picture is deduced by measuring the shift of the reference required in order to minimise Σ|FDIF| or Σ FDIF². As in method (ii), let the pictures be represented by g1 and g2. Consider the difference between pixel i in picture 1 and pixel i+k in picture 2.
E[(g1(i) − g2(i+k))²] = E[g1(i)²] − 2E[g1(i)·g2(i+k)] + E[g2(i+k)²]

E[g1(i)²] and E[g2(i+k)²] do not vary with k.

Therefore E[(g1(i) − g2(i+k))²] is at a minimum when E[g1(i)·g2(i+k)] is at a maximum.

However, the cross-correlation of g1(i) and g2(i+k) is Ck = E[g1(i)·g2(i+k)], where E denotes the expectation value.

Thus, finding the value of k which minimises E[(g1(i) − g2(i+k))²] is equivalent to finding the value of k for which Ck (= E[g1(i)·g2(i+k)]) is a maximum.
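The equivalence argued above can be checked numerically with the small sketch below; cyclic shifts (np.roll) are used so that the signal-energy terms are exactly independent of k, which is the assumption behind the argument, and the shift range is an arbitrary illustrative choice.

import numpy as np

def best_shift_both_ways(g1, g2, max_shift=5):
    # Returns (k minimising the squared frame difference,
    #          k maximising the cross-correlation); the two coincide.
    shifts = range(-max_shift, max_shift + 1)
    sq_err = {k: float(np.sum((g1 - np.roll(g2, k)) ** 2)) for k in shifts}
    corr = {k: float(np.sum(g1 * np.roll(g2, k))) for k in shifts}
    return min(sq_err, key=sq_err.get), max(corr, key=corr.get)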

Claims (9)

1. A method of correcting unsteadiness in a video signal generated from cinematographic film, comprising deriving from the input video signal displacement signals representing the vector displacement of successive frames, and forming an output video signal from the input video signal by interpolation under the control of the displacement signals.
2. A method according to claim 1, including detecting in the input video signal global changes in the picture content, and inhibiting the interpolation control in response to the detection of any such changes.
3. A method according to claim 1 or 2, in which the displacement signals are derived by determining differences between adjacent pixels on the same line, corresponding pixels on adjacent lines, and corresponding pixels on adjacent frames, and combining the resultants in accordance with a formula based on a truncated Taylor expansion.
4. A method according to claim 3, wherein the formula comprises expressions (3) and (4) as herein defined.
5. Apparatus for use in the method of claim 1, comprising an input for receiving a video signal generated from cinematographic film, displacement signal deriving means for deriving from the input video signal displacement signals representing the vector displacement of successive frames, and interpolation means for forming an output video signal from the input video signal by interpolation under the control of the displacement means.
6. Apparatus according to claim 5, including exception detector means for detecting in the input video signal global changes in the signal content and for inhibiting the control of the interpolation means in response to the detection of any such changes.
7. Apparatus for measuring unsteadiness in a video signal generated from cinematographic film, comprising an input for receiving a video signal generated from cinematographic film, means for deriving signals EDIF, LDIF and FDIF representing respectively the differences between adjacent pixels on the same line, corresponding pixels on adjacent lines, and corresponding pixels on adjacent frames, and means for combining these signals in accordance with a formula based on a truncated Taylor expansion to provide signals representative of the image displacement.
8. Apparatus according to claim 7, wherein the formula comprises expressions (3) and (4) as herein defined.
9. Apparatus for measuring unsteadiness in a video signal generated from cinematographic film, substantially as herein described with reference to the drawings.
GB08522347A 1984-09-07 1985-09-09 Measurement and correction of film unsteadiness in a video signal Expired GB2165417B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB08522347A GB2165417B (en) 1984-09-07 1985-09-09 Measurement and correction of film unsteadiness in a video signal

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB848422716A GB8422716D0 (en) 1984-09-07 1984-09-07 Measurement and correction of film unsteadiness
GB08522347A GB2165417B (en) 1984-09-07 1985-09-09 Measurement and correction of film unsteadiness in a video signal

Publications (3)

Publication Number Publication Date
GB8522347D0 GB8522347D0 (en) 1985-10-16
GB2165417A true GB2165417A (en) 1986-04-09
GB2165417B GB2165417B (en) 1988-02-10

Family

ID=26288197

Family Applications (1)

Application Number Title Priority Date Filing Date
GB08522347A Expired GB2165417B (en) 1984-09-07 1985-09-09 Measurement and correction of film unsteadiness in a video signal

Country Status (1)

Country Link
GB (1) GB2165417B (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0294282A1 (en) * 1987-06-04 1988-12-07 Thomson Grand Public Method for the temporal interpolation of pictures, and apparatus for carrying out this method
DE3736789A1 (en) * 1987-10-30 1989-05-11 Broadcast Television Syst METHOD FOR AUTOMATICALLY CORRECTING IMAGE ERRORS IN FILM SCANNING
EP0343728A1 (en) * 1988-05-26 1989-11-29 Koninklijke Philips Electronics N.V. Method of and arrangement for motion detection in an interlaced television picture obtained after film-to-television conversion
WO1990000334A1 (en) * 1988-07-01 1990-01-11 Plessey Overseas Limited Improvements in or relating to image stabilisation
US4903131A (en) * 1987-10-30 1990-02-20 Bts Broadcast Television Systems Gmbh Method for the automatic correction of errors in image registration during film scanning
EP0454481A2 (en) * 1990-04-27 1991-10-30 Canon Kabushiki Kaisha Movement vector detection device
GB2255871A (en) * 1991-08-16 1992-11-18 Rank Cintel Ltd Telecine with interpolated control parameters
US5189518A (en) * 1989-10-17 1993-02-23 Mitsubishi Denki Kabushiki Kaisha Image blur correcting apparatus
US5214751A (en) * 1987-06-04 1993-05-25 Thomson Grand Public Method for the temporal interpolation of images and device for implementing this method
EP0649256A2 (en) * 1993-10-19 1995-04-19 Canon Kabushiki Kaisha Motion compensation of a reproduced image signal
US5475423A (en) * 1992-12-10 1995-12-12 U.S. Philips Corporation Film scanner which doubly scans to correct for film speed and position errors
US5619258A (en) * 1992-02-28 1997-04-08 Rank Cintel Limited Image stability in telecines
US5651087A (en) * 1994-10-04 1997-07-22 Sony Corporation Decoder for decoding still pictures data and data on which data length of still picture data is recorded reproducing apparatus for reproducing recording medium and reproducing method thereof
GB2289184B (en) * 1994-04-21 1998-10-28 Pandora Int Ltd Telecine systems
US6118478A (en) * 1994-04-21 2000-09-12 Pandora International Limited Telecine systems for high definition use
DE19536691B4 (en) * 1995-09-30 2008-04-24 Bts Holding International B.V. Method and device for correcting image frame errors in television filming

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2139039A (en) * 1983-03-11 1984-10-31 British Broadcasting Corp Electronically detecting the presence of film dirt
GB2139037A (en) * 1983-03-11 1984-10-31 British Broadcasting Corp Measuring the amount of unsteadiness in video signals derived from cine film

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2139039A (en) * 1983-03-11 1984-10-31 British Broadcasting Corp Electronically detecting the presence of film dirt
GB2139037A (en) * 1983-03-11 1984-10-31 British Broadcasting Corp Measuring the amount of unsteadiness in video signals derived from cine film

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0294282A1 (en) * 1987-06-04 1988-12-07 Thomson Grand Public Method for the temporal interpolation of pictures, and apparatus for carrying out this method
FR2616248A1 (en) * 1987-06-04 1988-12-09 Thomson Grand Public TEMPORAL INTERPOLATION METHOD OF IMAGES AND DEVICE FOR IMPLEMENTING SAID METHOD
WO1988010046A1 (en) * 1987-06-04 1988-12-15 Thomson Grand Public Process and device for temporal interpolation of images
US5214751A (en) * 1987-06-04 1993-05-25 Thomson Grand Public Method for the temporal interpolation of images and device for implementing this method
US4903131A (en) * 1987-10-30 1990-02-20 Bts Broadcast Television Systems Gmbh Method for the automatic correction of errors in image registration during film scanning
DE3736789A1 (en) * 1987-10-30 1989-05-11 Broadcast Television Syst METHOD FOR AUTOMATICALLY CORRECTING IMAGE ERRORS IN FILM SCANNING
US4875102A (en) * 1987-10-30 1989-10-17 Bts Broadcast Television Systems Gmbh Automatic correcting of picture unsteadiness in television film scanning
US4933759A (en) * 1988-05-26 1990-06-12 U.S. Philips Corporation Method of and arrangement for motion detection in an interlaced television picture obtained after film-to-television conversion
EP0343728A1 (en) * 1988-05-26 1989-11-29 Koninklijke Philips Electronics N.V. Method of and arrangement for motion detection in an interlaced television picture obtained after film-to-television conversion
WO1990000334A1 (en) * 1988-07-01 1990-01-11 Plessey Overseas Limited Improvements in or relating to image stabilisation
US5053876A (en) * 1988-07-01 1991-10-01 Roke Manor Research Limited Image stabilization
US5450126A (en) * 1989-10-17 1995-09-12 Mitsubishi Denki Kabushiki Kaisha Image blur correcting apparatus
GB2239575B (en) * 1989-10-17 1994-07-27 Mitsubishi Electric Corp Motion vector detecting apparatus and image blur correcting apparatus, and video camera including such apparatus
US5189518A (en) * 1989-10-17 1993-02-23 Mitsubishi Denki Kabushiki Kaisha Image blur correcting apparatus
US5296925A (en) * 1990-04-27 1994-03-22 Canon Kabushiki Kaisha Movement vector detection device
EP0454481A3 (en) * 1990-04-27 1992-08-05 Canon Kabushiki Kaisha Movement vector detection device
EP0454481A2 (en) * 1990-04-27 1991-10-30 Canon Kabushiki Kaisha Movement vector detection device
GB2255871B (en) * 1991-08-16 1994-12-07 Rank Cintel Ltd Improvements in telecines
GB2255871A (en) * 1991-08-16 1992-11-18 Rank Cintel Ltd Telecine with interpolated control parameters
US5619258A (en) * 1992-02-28 1997-04-08 Rank Cintel Limited Image stability in telecines
US5475423A (en) * 1992-12-10 1995-12-12 U.S. Philips Corporation Film scanner which doubly scans to correct for film speed and position errors
EP0649256A2 (en) * 1993-10-19 1995-04-19 Canon Kabushiki Kaisha Motion compensation of a reproduced image signal
EP0649256A3 (en) * 1993-10-19 1998-02-04 Canon Kabushiki Kaisha Motion compensation of a reproduced image signal
US6049354A (en) * 1993-10-19 2000-04-11 Canon Kabushiki Kaisha Image shake-correcting system with selective image-shake correction
GB2289184B (en) * 1994-04-21 1998-10-28 Pandora Int Ltd Telecine systems
US6118478A (en) * 1994-04-21 2000-09-12 Pandora International Limited Telecine systems for high definition use
US5651087A (en) * 1994-10-04 1997-07-22 Sony Corporation Decoder for decoding still pictures data and data on which data length of still picture data is recorded reproducing apparatus for reproducing recording medium and reproducing method thereof
DE19536691B4 (en) * 1995-09-30 2008-04-24 Bts Holding International B.V. Method and device for correcting image frame errors in television filming

Also Published As

Publication number Publication date
GB8522347D0 (en) 1985-10-16
GB2165417B (en) 1988-02-10

Similar Documents

Publication Publication Date Title
GB2165417A (en) Measurement and correction of film unsteadiness in a video signal
KR920001006B1 (en) Tv system conversion apparatus
JP3518689B2 (en) Derivation of studio camera position and movement from camera images
JP3226539B2 (en) Video image processing
EP0541389B1 (en) Method for predicting move compensation
EP0180446B1 (en) Methods of detecting motion of television images
JP4053422B2 (en) Method for reducing the influence of halo in interpolation by motion compensation
KR100330797B1 (en) Motion Estimation Method and Apparatus Using Block Matching
JPH0614305A (en) Method for introducing motion vector expressing movenent between fields or frames of image signals and image-method converting device using method thereof
US5357287A (en) Method of and apparatus for motion estimation of video data
KR20050061556A (en) Image processing unit with fall-back
JPH0721946B2 (en) Error correction method of digital television signal
US8189054B2 (en) Motion estimation method, device, and system for image processing
KR20070088836A (en) Image stabilizer and system having the same and method thereof
US5943090A (en) Method and arrangement for correcting picture steadiness errors in telecine scanning
GB2187913A (en) Measurement of film unsteadiness in a video signal
KR100857731B1 (en) Facilitating motion estimation
EP0632915B1 (en) A machine method for compensating for non-linear picture transformations, e.g. zoom and pan, in a video image motion compensation system
KR970010043B1 (en) Motion victor processing in digital television images
KR20040046360A (en) Motion detection apparatus and method
EP0659021B1 (en) Detection of global translations between images
JPH08315151A (en) Method and circuit arrangement for undersampling in case of movement evaluation
Thomas Distorting the time axis: motion-compensated image processing in the studio
EP0659022A2 (en) Detection of global translations between images
JP2600520B2 (en) Image motion compensation device

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20030909