GB2187913A - Measurement of film unsteadiness in a video signal


Info

Publication number
GB2187913A
Authority
GB
United Kingdom
Prior art keywords
video signal
unsteadiness
measurement
film
signals
Prior art date
Legal status
Granted
Application number
GB08708047A
Other versions
GB2187913B (en)
GB8708047D0 (en)
Inventor
Karina Lyn Minakovic
Nicholas Edward Tanton
Current Assignee
British Broadcasting Corp
Original Assignee
British Broadcasting Corp
Priority date
Filing date
Publication date
Priority claimed from GB848422716A external-priority patent/GB8422716D0/en
Application filed by British Broadcasting Corp filed Critical British Broadcasting Corp
Priority to GB08708047A priority Critical patent/GB2187913B/en
Publication of GB8708047D0 publication Critical patent/GB8708047D0/en
Publication of GB2187913A publication Critical patent/GB2187913A/en
Application granted granted Critical
Publication of GB2187913B publication Critical patent/GB2187913B/en
Expired legal-status Critical Current


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/144Movement detection
    • H04N5/145Movement estimation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/527Global motion vector estimation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681Motion detection
    • H04N23/6811Motion detection based on the image signal
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/253Picture signal generating by scanning motion picture films or slide opaques, e.g. for telecine

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

Unsteadiness in a video signal derived from cine film is corrected by using a motion vector measurement circuit 14 to derive from the input video signal displacement signals which represent the horizontal and vertical displacement of successive frames. A two-dimensional interpolator 18 forms an output video signal by interpolation from the input video signal under the control of a control circuit 16 in dependence upon the displacement signals. An exception detector 22 can detect pans, zooms and shot changes and inhibit the interpolation control when such occur. The motion vector measurement is made by determining the differences between adjacent pixels on the same line, corresponding pixels on adjacent lines, and corresponding pixels on adjacent frames, and combining the resultants in accordance with expressions (3) and (4) herein, which are based on a first-order-truncated Taylor expansion.

Description

SPECIFICATION
Measurement of film unsteadiness in a video signal
A significant proportion of film shown on television suffers from image unsteadiness. This impairment is particularly associated with 16 mm film and results from frame-to-frame differences in the position of the optical image on the transmission print. When scanned by the telecine, these frame-to-frame positional differences are usually manifest as a horizontal weaving motion (with a typical period of about 1½ seconds) or as a vertical hopping motion.
Various potential sources of this unsteadiness have been identified and discussed; see in particular two papers presented at BKSTS Film '71, respectively by WRIGHT, D.T., "16 mm Film: Image steadiness in television presentation Part 1: The measurement of unsteadiness and prediction of subjective impairment", and SANDERS, J.R., "16 mm Film: Image steadiness in television presentation Part 2: Causes of unsteadiness" (BBC Research Department Reports Nos. 1971/28 and 1971/29). The sources of unsteadiness include film positioning inaccuracy in the camera, perforation inaccuracy in the film stock, printer negative/positive positioning inaccuracy, and the telecine film transport; of these, the high-speed optical printer contributes most to image unsteadiness.
In principle, individual cameras, printers and telecines can be checked and adjusted to reduce their individual contributions to image unsteadiness; similarly it would be feasible to modify cameras so that they added a test pattern to the exposed image out of shot, a pattern which would be subject to the same treatment as the required image and could be used to measure and correct for unsteadiness when the film is scanned in the telecine. To perform these adjustments and modifications universally is wholly impractical: the broadcaster does not in general have complete control over the sources and high-speed printing equipment with which the transmission print of any film has been exposed.
A method of compensating for film image unsteadiness using the video signal from the telecine is therefore highly desirable.
The invention is defined in the claims below to which reference should now be made.
The invention will be described in more detail by way of example, with reference to the drawings, in which:
Figure 1 is a block circuit diagram of a basic unsteadiness compensator embodying the invention;
Figure 2 is a block diagram of a modified unsteadiness compensator for use with RGB signals;
Figure 3 is a diagram showing how a zoom is characterised by motion being symmetrical about the optical centre of the display;
Figure 4 is a block circuit diagram for measurement of the element-to-element difference EDIF;
Figure 5 is a vertical position versus time diagram illustrating the lines used in measurement of the line-to-line difference LDIF;
Figure 6 is a block diagram of a circuit giving the signals required to measure EDIF, FDIF and LDIF;
Figure 7 is a block circuit diagram based on Figure 6 including second order terms;
Figure 8 is a block diagram of a circuit used to form the products; and
Figure 9 is a block diagram of a circuit used to accumulate the product terms.
Figure 1 shows an unsteadiness compensator 10 with an input 12. A motion vector measurement circuit 14 is connected to the input and provides an output to an interpolation control circuit 16, which controls a two-dimensional (vertical/horizontal) interpolator 18.
A compensating delay 20 provides an equivalent delay in the signal path to that of circuits 14 and 16.
Also connected to the input is an exception detector 22 which provides a second, overriding input to the interpolation control circuit 16. The output 24 of the interpolator 18 constitutes the circuit output.
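In software terms, the signal flow of Figure 1 might be sketched as below; the function names stand in for circuits 14, 22 and 16/18 and are purely illustrative, and the previous frame is used as the reference for simplicity (the description notes that a mean position over several frames may serve as the reference instead).

```python
def compensate_unsteadiness(frames, measure_motion_vector, detect_exception, interpolate_shift):
    """Sketch of the Figure 1 compensator: each frame is shifted back by its
    measured displacement unless an exception (pan, zoom, shot change) is flagged.
    The three callables stand in for circuits 14, 22 and 18 respectively."""
    reference = frames[0]
    output = [reference]
    for frame in frames[1:]:
        dx, dy = measure_motion_vector(reference, frame)      # circuit 14
        if detect_exception(reference, frame):                 # circuit 22
            output.append(frame)                               # interpolation inhibited
        else:
            output.append(interpolate_shift(frame, -dx, -dy))  # circuit 18 under control 16
        reference = frame
    return output
```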
In Figure 2 the signal path is split into separate R, G and B paths for red, green and blue signals. Separate interpolators are controlled in parallel. A combining circuit 26 forms a luminance signal Y for the control circuitry.
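The patent does not state how circuit 26 weights R, G and B; a sketch using the common Rec. 601 luma weights, as an assumption, would be:

```python
import numpy as np

def combine_luminance(r, g, b):
    """Form a luminance-like control signal Y from R, G, B frames.
    Rec. 601 weights are assumed; the patent leaves the weighting unspecified."""
    return 0.299 * np.asarray(r) + 0.587 * np.asarray(g) + 0.114 * np.asarray(b)
```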
Central to a successful method of unsteadiness compensation is the measurement of the film image position on a frame-by-frame basis. Where an optical image is still available (e.g. in the optical printer or a modified telecine), a measurement of the frame-to-frame motion could be performed with respect to external references (sprocket hole, frame bar, test pattern exposed out of shot in the camera, etc.). However, once the optical image has been scanned, the image information and reference (in this case syncs) are locked together, sampled by the scanning process. A measure of the image unsteadiness can now only be obtained from the scanned signal itself, and this measurement must be performed in the presence of moving picture detail and of random fluctuations caused by film grain, film surface defects (dirt, scratches etc.) and electrical noise. The measurement must also be made with respect to an assumed reference, for example with respect to the mean position over a representative number of frames. The derived motion vectors can then be used to control the two-dimensional interpolator, thus enabling each frame to be repositioned horizontally and vertically with sufficient accuracy to significantly reduce the visibility of unsteadiness.
As noted above, the unsteadiness measurement must be made in the presence of frame-to-frame differences caused by real motion within the scene and by random fluctuations. Apart from global scene movement such as pans and zooms, these spurious signals are likely to be spatially localised, whereas the unsteadiness represents a gross translation of the entire scene. By performing unsteadiness measurement on the entire scanned image, the effects of spatially localised 'noise' signals are considerably diluted. Furthermore, by suitably dividing the measurement area and analysing the results of each measurement, the global scenic changes can be categorised and allowance made for them; e.g. the interpolator might be disabled when pans and zooms, or shot changes, are detected.
For example, a zoom detector would be possible by performing motion measurement on the four quarters of the image area, as indicated in Figure 3, and comparing the estimated motion vectors for the four quarters. When the movement is symmetrical about the optical picture centre, as shown by the arrows, zooming is assumed to have taken place. To the extent that the arrows all point in the same direction, a pan may be presumed to have occurred. Detection can be improved by looking for systematic displacements between successive frames. Similarly, it will be necessary to relax the interpolator when a shot change has been detected, and a known shot-change detector may be used for this purpose.
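A software sketch of this quadrant test might look as follows; the threshold value and the simple symmetry and common-direction tests are illustrative assumptions rather than details taken from the patent.

```python
import numpy as np

def classify_global_motion(quadrant_vectors, threshold=0.5):
    """Classify frame-to-frame motion from four quadrant motion vectors,
    ordered top-left, top-right, bottom-left, bottom-right.
    Returns 'zoom', 'pan' or 'steady' (thresholds are illustrative only)."""
    v = np.asarray(quadrant_vectors, dtype=float)    # shape (4, 2): (dx, dy) per quadrant
    mean = v.mean(axis=0)
    # Zoom: quadrant vectors roughly cancel about the picture centre
    # while each is individually significant.
    if np.linalg.norm(mean) < threshold and np.all(np.linalg.norm(v, axis=1) > threshold):
        return "zoom"
    # Pan: all vectors share a significant common direction.
    if np.linalg.norm(mean) > threshold and np.all(v @ mean > 0):
        return "pan"
    return "steady"
```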
The use of an exception detector 22, sensitive to image movement caused other than by unsteadiness, enables a more sophisticated motion detector to be used with less danger of serious errors being introduced.
Methods of detecting the unsteadiness will now be described.
Several methods for motion estimation might be considered, for instance:
(i) Taylor expansion: using a truncated Taylor expansion of the frame difference, based on a paper by NETRAVALI, A.N. and ROBBINS, J.D., "Motion-compensated television coding: Part 1", B.S.T.J., Vol. 58, No. 3, March 1979.
(ii) Phase correlation: a phase correlation method, based on a paper by PEARSON, J.J., HINES, D.C., GOLDSMAN, S. and KUGLIN, C.D., "Video rate image correlation processor", S.P.I.E. Vol. 119, Application of Digital Image Processing (IOCC 1977).
(iii) Minimum frame difference: determination of the shift required in order to minimise the difference between successive frames.
With the Taylor expansion method, truncating the Taylor series after the first-order terms gives expressions (3) and (4) herein for the horizontal and vertical displacements, where FDIF is the frame difference (i.e. the difference between corresponding pixels on adjacent frames), and EDIF, LDIF are linear approximations to the first derivatives with respect to x and y respectively (x defines the horizontal position, positive to the right, and y defines the vertical position, positive downwards), i.e. the differences between adjacent pixels on the same line and on adjacent lines respectively.
The derivation of these equations is given in Appendix A below.
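As an indicative sketch only (an inference from the sums ΣEDIF², ΣLDIF², ΣEDIF·FDIF and ΣLDIF·FDIF accumulated by the circuits described later, and from the neglect of small terms noted in Appendix A, rather than a quotation of the patent's own formulae), expressions (3) and (4) would take the form:

$$\Delta x \approx -\,\frac{\sum \mathrm{EDIF}\cdot\mathrm{FDIF}}{\sum \mathrm{EDIF}^{2}} \quad (3) \qquad\qquad \Delta y \approx -\,\frac{\sum \mathrm{LDIF}\cdot\mathrm{FDIF}}{\sum \mathrm{LDIF}^{2}} \quad (4)$$

with the sums taken over the measurement area; the sign convention depends on how the displacement is defined.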
The second, phase correlation, method finds an estimate of the motion by determining the point at which the correlation of the phase information of successive frames reaches a maximum. A more detailed explanation of the underlying mathematics is given in Appendix B.
Minimising the frame difference appears, at first, to be far simpler than phase correlation. However, the two methods do have a fundamental similarity, as shown in Appendix C.
It may seem that either method (ii) or (iii) would be favoured, since both have the potential to measure even large displacements, unlike (i) which is only valid for sub-pixel shifts. But the ability to detect large shifts is of secondary importance when also considering the high cost of the hardware; when using phase correlation or frame-difference minimisation, interpolators are required to enable measurement of non-integral pixel shifts. Furthermore, large frame-to-frame displacements will usually be intentional, e.g. special effects simulating earthquakes and explosions, fast pans and shot changes. Such motion does not require correction.
Fast motion due to a pan could be identified by taking the displacement estimates of several successive frames and noting the systematic nature of the shifts.
Earthquake simulations, however, will be essentially random and could possibly appear as extremely unsteady film. Therefore, a system which fails to detect large displacements has the advantage that it reduces the complexity of any motion analyser.
Accordingly, the Taylor expansion-based method is preferred.
Finally, it should be noted that Appendix A also derives expressions to estimate motion by taking the Taylor expansion to second-order terms. However, computer simulations do not suggest that any overall increase in accuracy would be achieved, although the arithmetic is significantly more complex.
The method of movement estimation will now be described with reference to the circuit diagram of Figure 4. As given in the Appendix, EDIF(x, y) = ½(I(x + 1, y) − I(x − 1, y)). Neglecting factors of 2 (which can be accounted for in the later calculations), EDIF is found by delaying the video signal by two pixels' duration and subtracting it from the undelayed signal.
Thus, as shown in Figure 4, the input signal at terminal 12 is applied to two series-connected pixel (picture element) delays 32 and 34. The output of delay 34 is applied to the inverting input of a subtractor 36, which receives the input video at its non-inverting input and provides a measure of EDIF at its output 38.
Similarly, LDIF(x, y) = ½(I(x, y + 1) − I(x, y − 1)). If the picture were scanned sequentially then LDIF could be obtained simply by replacing the element delays in the circuit 30 shown in Figure 4 by line delays. However, in the case of interlaced scanning, the lines used in determining LDIF do not occur in the same field as the line about which LDIF is being measured. It should be remembered that, since the pictures are from film, the information on field 1 and field 2 does not represent the same instant in time.
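As a minimal numpy sketch of the three difference signals, assuming sequentially scanned (non-interlaced) frames and omitting the factors of 2 as in the hardware; the interlaced indexing is dealt with in the following paragraphs.

```python
import numpy as np

def difference_signals(curr, prev):
    """Return EDIF, LDIF, FDIF arrays for a pair of sequentially-scanned frames.
    curr, prev: 2-D arrays indexed [line, pixel]; factors of 2 are omitted,
    and border samples are left at zero."""
    curr = np.asarray(curr, dtype=float)
    prev = np.asarray(prev, dtype=float)
    edif = np.zeros_like(curr)
    ldif = np.zeros_like(curr)
    edif[:, 1:-1] = curr[:, 2:] - curr[:, :-2]   # I(x+1, y) - I(x-1, y)
    ldif[1:-1, :] = curr[2:, :] - curr[:-2, :]   # I(x, y+1) - I(x, y-1)
    fdif = curr - prev                           # I(x, y, t) - I(x, y, t-T)
    return edif, ldif, fdif
```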
Consider the two cases of LDIF for lines in field 1 and for lines in field 2 separately. Numbering the lines as in PAL System I, for pixels at line L1 in field 1:
FDIF(x, L1) = I(x, L1) − I(x, L1 − 625)
LDIF(x, L1) = I(x, L1 + 313) − I(x, L1 + 312)
and for pixels at line L2 in field 2:
FDIF(x, L2) = I(x, L2) − I(x, L2 − 625)
LDIF(x, L2) = I(x, L2 − 312) − I(x, L2 − 313).
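In software terms, the field-dependent line selection implied by these equations could be sketched as follows; this is a restatement of the offsets above, not of the hardware, and the function name is illustrative.

```python
def difference_line_offsets(odd_field):
    """Return (fdif_offset, ldif_offsets) in lines, relative to the current line,
    for the interlaced case described above. FDIF always reaches back one
    625-line frame; LDIF uses the two spatially adjacent lines, which lie in
    the other field (+313/+312 for field-1 pixels, -312/-313 for field-2 pixels)."""
    fdif_offset = -625
    ldif_offsets = (313, 312) if odd_field else (-312, -313)
    return fdif_offset, ldif_offsets

# Example: for a field-1 (odd) pixel on line L, LDIF = I[L + 313] - I[L + 312].
```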
Figure 5 shows that the last pixel required to find FDIF and LDIF is always in field 2, regardless of the field in which the pixel occurs. Hence, when forming Σ FDIF·LDIF, a delay of one field must be introduced into at least one set of values. Furthermore, since for field 1 values the lines needed for measurement of LDIF occur in field 2 (i.e. after the corresponding field 1 lines), whereas for field 2 values they occur in field 1 (i.e. before the corresponding field 2 lines), it is necessary to switch between the two cases. Thus a signal which indicates whether the present field is odd or even is required. A block diagram of a circuit giving the required signals is shown in Figure 6.
Figure 6 thus shows a circuit 40 forming the first part of the motion vector measurement circuit 14 of Figure 1, which is connected to the input 12. A series of delay elements 42a to 42i is used, providing the following delay times:
42a  one line
42b  311 lines
42c  one line less one pixel
42d  one pixel
42e  one pixel
42f  one line less one pixel
42g  311 lines
42h  one line
42i  312 lines
Some of the outputs, namely the input and the outputs of delays 42a, 42g and 42h, are applied to a selector 44 which receives a signal at an input 46 indicating whether an odd or even field is involved.
Using the terminology adopted above, for field 1 the input and the output of delay 42a are used for the L1 and L2 signals for use as LDIF, and for field 2 the outputs of delays 42g and 42h are employed instead. Corresponding signals E1 and E2 for EDIF are taken from delays 42e and 42c, and signals F1 and F2 for FDIF from delays 42i and 42d.
Since the pixels required to estimate the second derivatives occur in the same field as the corresponding pixel for which the values are required (see Appendix A), making provision for the measurement of second-order terms does not significantly increase the circuit complexity. Therefore, the means for measuring these terms can easily be included. A block diagram of the resulting circuit is shown in Figure 7. In practice, however, little advantage was obtained by the extra complexity. Figure 7 is similar to Figure 6, essentially the only difference being the addition of some further delays in the delay chain 42 so as to make more signals available simultaneously. The delay values and signals are noted on the Figure, which is not therefore described in more detail.
The signals required for EDIF, LDIF and FDIF are then passed to two identical circuits 50, a block diagram of which is shown in Figure 8, which find the difference signals and form the products EDIF·FDIF, EDIF² and LDIF·FDIF, LDIF². Twos-complement code is used in performing the differencing in subtractors 52, 54. The results are then converted using gates 56 and adders 58 to sign-and-magnitude representation for multiplication, since in this way 8-bit multipliers may be used. The multiplier 60 produces EDIF·FDIF and LDIF·FDIF, while the terms EDIF² and LDIF² are obtained from a PROM 62 addressed by the corresponding difference signals. These product terms are then converted back to twos-complement code using gates 64 and 66 and an adder 68, and fed to a circuit 70 which accumulates the products as shown in Figure 9. This includes an input buffer register 72 for receiving the values to be summed, an accumulator loop comprising an adder register 74 (also having a reset input) and a delay 76, and an output buffer register 78 for holding the accumulated output at the end of each frame. Since the first and last pixels of each active line, the first and last lines of the active picture, and pixels which occur during blanking do not have useful values of EDIF and LDIF, a control signal (HOLD) is sent to the circuit to ensure that meaningless values are ignored. An additional signal TAKE is used to indicate the end of an active picture, at which summation is complete, and a signal CLEAR to clear the registers before the next active frame.
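A behavioural sketch of the accumulation of Figure 9, treating HOLD, TAKE and CLEAR as per-sample flags; register widths and timing are simplified, and the class and method names are illustrative.

```python
class ProductAccumulator:
    """Behavioural model of the Figure 9 accumulator: sums product terms over
    the active picture, ignoring samples flagged by HOLD, and latches the
    total into an output register on TAKE."""

    def __init__(self):
        self.total = 0
        self.output = 0

    def clock(self, product, hold=False, take=False, clear=False):
        if clear:            # CLEAR: reset before the next active frame
            self.total = 0
        if not hold:         # HOLD: skip border and blanking samples
            self.total += product
        if take:             # TAKE: end of active picture, latch the sum
            self.output = self.total
        return self.output
```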
The signals ΣEDIF², ΣLDIF², ΣEDIF·FDIF and ΣLDIF·FDIF from circuits as in Figure 9 are then passed to a computing device which calculates values for Δx and Δy in accordance with the equations (3) and (4) given above. The resultant constitutes the output of the motion vector measurement circuit 14 of Figure 1. Provided it is not inhibited by the exception detector 22, the control circuit 16 then instructs the interpolator 18 to make the necessary shifts to compensate for the detected displacements.
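Following the hedged reconstruction of expressions (3) and (4) given earlier, the computing device's final step would reduce to something like the sketch below; the function name and the guard term eps are illustrative assumptions.

```python
def displacement_from_sums(sum_edif2, sum_ldif2, sum_edif_fdif, sum_ldif_fdif, eps=1e-9):
    """Estimate (dx, dy) from the four accumulated sums, using the assumed
    first-order least-squares form of expressions (3) and (4) sketched earlier.
    eps guards against division by zero on featureless pictures."""
    dx = -sum_edif_fdif / (sum_edif2 + eps)
    dy = -sum_ldif_fdif / (sum_ldif2 + eps)
    return dx, dy
```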
Attention is drawn to the description and claims of our application No. 85 22347 (Serial No. 2,165,417) out of which this application is divided.
APPENDICES - Motion estimation algorithms
Appendix A - Taylor expansion
Consider a pixel at position (x, y), intensity I(x, y).
Define the frame difference between successive frames as
FDIF(x, y) = I(x, y, t) − I(x, y, t − T)   (1)
where T is the interval between successive frames, and where lines are numbered sequentially.
Suppose a pure translation (Δx, Δy) has occurred.
The part of the picture represented by the pixel at position (x, y), time t − T, has moved to (x + Δx, y + Δy) by time t.
Hence
I(x, y, t − T) = I(x + Δx, y + Δy, t)   (2)
Substituting (2) in (1) gives
FDIF(x, y) = I(x, y, t) − I(x + Δx, y + Δy, t)
For small Δx, Δy, a Taylor expansion of the right-hand side about (x, y, t) may be applied.
A1. First order expansion
Considering first-order terms only then gives FDIF(x, y) ≈ −(Δx·∂I/∂x + Δy·∂I/∂y), with the derivatives evaluated at (x, y, t).
Δx and Δy may now be found using linear regression. For the general expression Zi = β1·ui + β2·vi + εi, where εi is the error term and Σεi² = Σ(Zi − β1·ui − β2·vi)², the values of β1 and β2 which minimise the mean square error are found by setting the partial derivatives of this sum with respect to β1 and β2 to zero.
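For completeness, the standard least-squares result for this general expression is sketched below; the identification Zi = FDIF, ui = EDIF, vi = LDIF, with β1 and β2 standing for the negatives of Δx and Δy, is an inference from the first-order relation above rather than a quotation of the patent:

$$\beta_1\sum u_i^{2} + \beta_2\sum u_i v_i = \sum Z_i u_i,\qquad \beta_1\sum u_i v_i + \beta_2\sum v_i^{2} = \sum Z_i v_i$$

so that

$$\beta_1=\frac{\sum Z_i u_i\sum v_i^{2}-\sum Z_i v_i\sum u_i v_i}{\sum u_i^{2}\sum v_i^{2}-\left(\sum u_i v_i\right)^{2}},\qquad \beta_2=\frac{\sum Z_i v_i\sum u_i^{2}-\sum Z_i u_i\sum u_i v_i}{\sum u_i^{2}\sum v_i^{2}-\left(\sum u_i v_i\right)^{2}}.$$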
Linear approximations to the first derivatives are taken, as in the main text, as EDIF(x, y) = ½(I(x + 1, y) − I(x − 1, y)) and LDIF(x, y) = ½(I(x, y + 1) − I(x, y − 1)).
A2. Second order expansion
If second-order terms are also used then the expression for FDIF additionally contains terms in the second derivatives of I and in Δx², Δy² and Δx·Δy.
Writing this in a general form and, as before, minimising the mean square error gives a pair of simultaneous equations in the displacement estimates.
In practice, it is found that some of these terms may be neglected and the equations reduced to the simpler pair (5) and (6).
It can be seen that this method results in two simultaneous cubics. For simplicity, an approximate solution can be found by substituting estimates of β2 and β1 (from first-order considerations) into (5) and (6) respectively. This leaves two independent cubics which can then be solved analytically. Since cubics have three roots, the one nearest to the initial estimates is chosen.
Approximations to the first derivatives can be found as before. The second-order derivatives are estimated in a similar manner, e.g. ∂²I/∂x² may be approximated by I(x + 1, y) − 2I(x, y) + I(x − 1, y), and similarly for ∂²I/∂y² and the cross derivative.
Appendix B - Phase correlation method
Let the two sampled pictures being compared be represented by g1(x, y) and g2(x, y), with two-dimensional discrete Fourier transforms G1(m, n) and G2(m, n) respectively.
Assuming that a pure translation (dx, dy) has occurred, then G2(m, n) differs from G1(m, n) only by a linear phase term whose slope is proportional to (dx, dy). But such a pure phase term is the transform of a delta function at (dx, dy). Therefore, computing the phase difference between the two transforms, i.e. the cross-power spectrum G1·G2* normalised to unit magnitude,
and finding the inverse transform will give a function with a maximum at (dx, dy). Of course, since the cross-correlation of g1 and g2 transforms to G1·G2*, this is equivalent to finding the cross-correlation of the two pictures and then determining where the maximum occurs. A delta function will not result in practice unless noiseless, purely cyclically translated pictures are used.
It will be noted that using a low pass filter on the inverse transform will reduce spurious peaks due to noise.
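As an illustrative sketch only (integer-pixel peak search with numpy FFTs, not the video-rate processor of the cited paper), the phase correlation of this appendix can be written as:

```python
import numpy as np

def phase_correlation(g1, g2, eps=1e-9):
    """Estimate the (dy, dx) translation between two equal-sized pictures by
    phase correlation: normalise the cross-power spectrum to unit magnitude
    and locate the peak of its inverse transform (integer resolution only)."""
    G1 = np.fft.fft2(g1)
    G2 = np.fft.fft2(g2)
    cross_power = G1 * np.conj(G2)
    surface = np.fft.ifft2(cross_power / (np.abs(cross_power) + eps)).real
    peak = np.unravel_index(np.argmax(surface), surface.shape)
    # Peaks beyond half the picture size correspond to negative (cyclic) shifts.
    shifts = [p - s if p > s // 2 else p for p, s in zip(peak, surface.shape)]
    return tuple(shifts)  # (dy, dx)
```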
Appendix C - Minimum frame difference
In this method, the amount by which a picture is displaced from the reference picture is deduced by measuring the required shift of the reference in order to minimise Σ|FDIF| or Σ FDIF².
As in method (ii), let the pictures be represented by g1 and g2, and consider the difference between pixel i in picture 1 and pixel i + k in picture 2. The expected squared difference is
E[((g1)i − (g2)i+k)²] = E[(g1)i²] + E[(g2)i+k²] − 2·E[(g1)i·(g2)i+k]
and the first two terms do not vary with k. Thus E[((g1)i − (g2)i+k)²] is at a minimum when E[(g1)i·(g2)i+k] is at a maximum.
However, the cross-correlation of (g1)i and (g2)i+k is Ck = E[(g1)i·(g2)i+k], where E denotes the expectation value.
Thus, finding the value of k which minimises E[((g1)i − (g2)i+k)²] is equivalent to finding the value of k for which Ck (= E[(g1)i·(g2)i+k]) is a maximum.

Claims (3)

1. Apparatus for measuring unsteadiness in a video signal generated from cinematographic film, comprising an input for receiving a video signal generated from cinematographic film, means for deriving signals EDIF, LDIF and FDIF representing respectively the differences between adjacent pixels on the same line, corresponding pixels on adjacent lines, and corresponding pixels on adjacent frames, and means for combining these signals in accordance with a formula based on a truncated Taylor expansion to provide signals representative of the image displacement.
2. Apparatus according to claim 7, wherein the formula comprises expressions (3) and (4) as herein defined.
Amendments to the claims have been filed, and have the following effect: Claim 2 above has been textually amended.
New or textually amended claims have been filed as follows:-
2. Apparatus according to claim 1, wherein the formula comprises expressions (3) and (4) as herein defined.
3. Apparatus for measuring unsteadiness in a video signal generated from a cinematographic film, substantially as herein described with reference to Figures 4 to 9.
GB08708047A 1984-09-07 1987-04-03 Measurement of film unsteadiness in a video signal Expired GB2187913B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB08708047A GB2187913B (en) 1984-09-07 1987-04-03 Measurement of film unsteadiness in a video signal

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB848422716A GB8422716D0 (en) 1984-09-07 1984-09-07 Measurement and correction of film unsteadiness
GB08708047A GB2187913B (en) 1984-09-07 1987-04-03 Measurement of film unsteadiness in a video signal

Publications (3)

Publication Number Publication Date
GB8708047D0 GB8708047D0 (en) 1987-05-07
GB2187913A true GB2187913A (en) 1987-09-16
GB2187913B GB2187913B (en) 1988-02-10

Family

ID=26288198

Family Applications (1)

Application Number Title Priority Date Filing Date
GB08708047A Expired GB2187913B (en) 1984-09-07 1987-04-03 Measurement of film unsteadiness in a video signal

Country Status (1)

Country Link
GB (1) GB2187913B (en)


Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4994918A (en) * 1989-04-28 1991-02-19 Bts Broadcast Television Systems Gmbh Method and circuit for the automatic correction of errors in image steadiness during film scanning
GB2239575B (en) * 1989-10-17 1994-07-27 Mitsubishi Electric Corp Motion vector detecting apparatus and image blur correcting apparatus, and video camera including such apparatus
US5189518A (en) * 1989-10-17 1993-02-23 Mitsubishi Denki Kabushiki Kaisha Image blur correcting apparatus
US5450126A (en) * 1989-10-17 1995-09-12 Mitsubishi Denki Kabushiki Kaisha Image blur correcting apparatus
EP0458239B1 (en) * 1990-05-21 1996-01-10 Matsushita Electric Industrial Co., Ltd. Motion vector detecting apparatus and image stabilizer including the same
EP0458239A2 (en) * 1990-05-21 1991-11-27 Matsushita Electric Industrial Co., Ltd. Motion vector detecting apparatus and image stabilizer including the same
US5317685A (en) * 1990-05-21 1994-05-31 Matsushita Electric Industrial Co., Ltd. Motion vector detecting apparatus and image stabilizer including the same
US5172226A (en) * 1990-05-21 1992-12-15 Matsushita Electric Industrial Co., Ltd. Motion vector detecting apparatus and image stabilizer including the same
EP0458189A2 (en) * 1990-05-21 1991-11-27 Matsushita Electric Industrial Co., Ltd. Motion vector detecting apparatus and image stabilizer including the same
EP0458189B1 (en) * 1990-05-21 1995-11-29 Matsushita Electric Industrial Co., Ltd. Motion vector detecting apparatus and image stabilizer including the same
GB2264606B (en) * 1992-02-28 1995-08-02 Rank Cintel Ltd Image stability in telecines
US5619258A (en) * 1992-02-28 1997-04-08 Rank Cintel Limited Image stability in telecines
GB2264606A (en) * 1992-02-28 1993-09-01 Rank Cintel Ltd A telecine providing improved image stability.
DE4236950C1 (en) * 1992-11-02 1994-03-24 Ulrich Dr Solzbach Method and device for processing and displaying image sequences
EP0666687A3 (en) * 1994-02-04 1996-02-07 At & T Corp Method for detecting camera motion induced scene changes.
EP0666687A2 (en) * 1994-02-04 1995-08-09 AT&T Corp. Method for detecting camera motion induced scene changes
US5943090A (en) * 1995-09-30 1999-08-24 U.S. Philips Corporation Method and arrangement for correcting picture steadiness errors in telecine scanning
DE19536691B4 (en) * 1995-09-30 2008-04-24 Bts Holding International B.V. Method and device for correcting image frame errors in television filming

Also Published As

Publication number Publication date
GB2187913B (en) 1988-02-10
GB8708047D0 (en) 1987-05-07

Similar Documents

Publication Publication Date Title
JP3518689B2 (en) Derivation of studio camera position and movement from camera images
GB2165417A (en) Measurement and correction of film unsteadiness in a video signal
US5510834A (en) Method for adaptive estimation of unwanted global picture instabilities in picture sequences in digital video signals
KR920001006B1 (en) Tv system conversion apparatus
KR940006048A (en) Image processing method and apparatus
KR930011725A (en) Motion compensation prediction method
EP0180446A2 (en) Methods of detecting motion of television images
KR100330797B1 (en) Motion Estimation Method and Apparatus Using Block Matching
MXPA04002210A (en) Image stabilization using color matching.
KR20050061556A (en) Image processing unit with fall-back
US8189054B2 (en) Motion estimation method, device, and system for image processing
GB2187913A (en) Measurement of film unsteadiness in a video signal
US5943090A (en) Method and arrangement for correcting picture steadiness errors in telecine scanning
EP0632915B1 (en) A machine method for compensating for non-linear picture transformations, e.g. zoom and pan, in a video image motion compensation system
US5619258A (en) Image stability in telecines
JPH08315151A (en) Method and circuit arrangement for undersampling in case of movement evaluation
JP2006215657A (en) Method, apparatus, program and program storage medium for detecting motion vector
Thomas Distorting the time axis: motion-compensated image processing in the studio
JP3252418B2 (en) Image shake determination device
JPH0795469A (en) Picture compensation device of camcorder
JP3903358B2 (en) Motion vector evaluation method and apparatus
EP0659022A2 (en) Detection of global translations between images
JPH0410885A (en) Movement vector detecting device
JPH01291581A (en) Motion correction type television cinema device
JP2600520B2 (en) Image motion compensation device

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20030909