EP0675643B1 - Method and apparatus for reducing conversion artefacts

Method and apparatus for reducing conversion artefacts

Info

Publication number
EP0675643B1
EP0675643B1 (application EP95104170A)
Authority
EP
European Patent Office
Prior art keywords
motion
fields
field
backward
search window
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
EP19950104170
Other languages
German (de)
French (fr)
Other versions
EP0675643A1 (en)
Inventor
Andrew Hackett
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Technicolor SA
Original Assignee
Thomson Multimedia SA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thomson Multimedia SA filed Critical Thomson Multimedia SA
Priority to EP19950104170
Publication of EP0675643A1
Application granted
Publication of EP0675643B1
Anticipated expiration
Expired - Lifetime

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/01 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0112 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level one of the standards corresponding to a cinematograph film standard
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/01 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0117 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving conversion of the spatial resolution of the incoming video signal
    • H04N7/012 Conversion between an interlaced and a progressive signal
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/01 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0135 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes
    • H04N7/014 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes involving the use of motion vectors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/14 Picture signal circuitry for video frequency region
    • H04N5/144 Movement detection
    • H04N5/145 Movement estimation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Television Systems (AREA)

Description

  • The present invention relates to a method and to an apparatus for reducing conversion artefacts when film source images are converted into interlace format pictures of at least doubled picture frequency.
  • Background
  • 50Hz television systems transmit one complete image every 1/25th of a second, using a system known as interlace which transmits one half image every 1/50th of a second, followed by the other half image 1/50th of a second later. Television cameras also operate using this interlace system, and so the original scene captured by the camera is displayed on a receiver with no motion problems. This is represented diagrammatically in Fig. 1. Time T is shown along the horizontal axis with the parallel arrows representing the sampling of the image which consists of alternate first FI1 and second FI2 fields. In this case, one half image is sampled every 1/50th of a second, that is the field period FIP. Horizontal position X is shown along the vertical axis, and the length of the arrows represents the position of the object as it moves. By drawing a line through the tips of the arrows, the trajectory TOO of the object may be observed. For this example, an object is depicted moving horizontally with a constant velocity.
  • Film, on the other hand, is a sequence of complete images, generated at a rate of one new image every 1/24th of a second. When film is transmitted on television, the projection rate is 25 images a second to give simple compatibility with the television system; the resulting inaccuracies in movement speed and audio pitch are ignored. This is shown in Fig. 2 using the same conventions as for Fig. 1 and a frame period FRP. The same trajectory TOO is traced by the object as it moves across the image.
    When film is transmitted by television, the same frame of film is used to generate both interlace television fields. This results in the information carried by the second field being temporally displaced from the original by 20ms. This is shown in Fig. 3. This displacement gives rise to two perceptible artefacts in the received image.
    The first is a double image, and arises from the expectancy of the human visual system that objects move with a constant or smoothly changing velocity. The brain perceives two objects moving with the same velocity, separated by the distance which the original object travels in 20ms. This is represented by the double trajectory DTOO drawn through the tips of the arrows.
    The second artefact is that of judder, caused by the apparent updating of the position of each object only once every 1/25th of a second, a time interval sufficiently long for individual images to be perceived as such rather than combining together to give the appearance of smooth motion.
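  • To make the 20ms displacement concrete, the following sketch (plain Python; the names Field, two_two_pulldown and film_frames are illustrative and not from the patent) shows how each 25Hz film frame supplies both fields of the 50Hz interlace signal, so that the second field of every pair is displayed 20ms after the instant at which its content was actually captured:
    from dataclasses import dataclass

    @dataclass
    class Field:
        lines: list          # every second line of a film frame
        display_time: float  # nominal display instant in seconds
        source_time: float   # instant at which the content was actually captured

    def two_two_pulldown(film_frames, frame_period=1.0 / 25.0):
        """2:2 pulldown: each 25Hz film frame is reused for both 50Hz fields."""
        fields = []
        for n, frame in enumerate(film_frames):
            t0 = n * frame_period
            # first field: content and display instant coincide
            fields.append(Field(frame[0::2], display_time=t0, source_time=t0))
            # second field: displayed 20ms later but taken from the same frame,
            # which is the temporal displacement behind the double image and judder
            fields.append(Field(frame[1::2], display_time=t0 + frame_period / 2, source_time=t0))
        return fields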
  • Invention
  • It is one object of the invention to disclose a method for reducing or even eliminating motion judder when progressive source images are displayed in interlace format. This object is reached by the method disclosed in claim 1.
  • It is a further object of the invention to disclose an apparatus which utilises the inventive method. This object is reached by the apparatus disclosed in claim 7.
  • This invention is suited for the removal of film judder in TV systems and may be applied either in a television receiver or video signal decoder or in the television studio before the television signal is transmitted.
    To overcome these defects the intermediate fields are generated from the original film frames by a process of motion compensated interpolation. The motion compensation measures the velocities of moving objects and generates the intermediate fields having moving objects correctly positioned so as to remove the above mentioned artefacts and to create, in this example, a single corrected trajectory CST. This is shown in Fig. 4. Each original first field OFFI is followed by a motion compensated second field MCSFI depicted by a dashed line. Advantageously, the invention can be used for 100Hz upconversion of a film source, suitable for display on a TV receiver with 100Hz scanning. In this case each original first field OFFI is followed by three interpolated fields MCSFI (dashed and dotted lines).
  • In principle, the inventive method is suited for reducing conversion artefacts when film source images are converted into interlace format of at least doubled picture frequency, wherein motion compensated fields of one type of said interlace format - e.g. second fields - are generated between non-motion compensated fields of the other type - e.g. first fields - of said interlace format.
  • In principle, the inventive apparatus is suited for reducing conversion artefacts when film source images are converted into interlace format of at least doubled picture frequency and includes:
    • interlace-to-progressive converter means which reconvert each pair of interlaced fields to form a progressive signal corresponding to a frame of the original film source image;
    • motion estimation means which calculate motion vectors for each pixel in said recreated images;
    • subsequent vector post-processing means which correct the motion vectors calculated in said motion estimation means;
    • double sided motion compensated interpolation means which operate on the appropriately delayed recreated images using said corrected motion vectors;
    • switching means which provide the output signal either from an appropriately delayed input field or from a processed image from said interpolation means (the sketch below illustrates this signal chain).
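  • Taken together, these blocks form a simple signal chain. The sketch below (Python with hypothetical interfaces standing for the blocks DSME, VPP and DMCI of Fig. 5; the compensating delays and frame memories are not modelled) shows how one motion compensated field is derived from two recreated film frames. It illustrates the structure only, not the patented implementation:
    def upconvert_pair(prev_frame, curr_frame, offi, dsme, vpp, dmci):
        """One output pair of the switch SW: the appropriately delayed original first
        field OFFI followed by the motion compensated second field MCSFI."""
        vectors = vpp(dsme(prev_frame, curr_frame))   # DSME + VPP: one corrected vector per pixel
        ifi2 = dmci(prev_frame, curr_frame, vectors)  # DMCI: progressive image at the FI2 position
        mcsfi = ifi2[1::2]                            # keep every second line -> field MCSFI
        return [offi, mcsfi]                          # SW alternates between the two signal paths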
  • Advantageous additional embodiments of the inventive apparatus result from the respective dependent claims.
  • Drawings
  • Preferred embodiments of the invention are described with reference to the accompanying drawings, which show in:
  • Fig. 1
    trajectory of a moving object in case of interlaced images from a camera source;
    Fig. 2
    trajectory of a moving object in case of images from a film source;
    Fig. 3
    trajectory of a moving object in case of interlace-displayed images from a film source;
    Fig. 4
    corrected trajectory of a moving object in case of interlace-displayed images from a film source, wherein every second field or three of four fields are motion compensated;
    Fig. 5
    inventive apparatus;
    Fig. 6
    progressive scan generation from film source being transmitted with interlaced pictures;
    Fig. 7
    two-sided block matching;
    Fig. 8
    double sided motion compensation.
  • Preferred embodiments
  • One implementation of an inventive apparatus is shown in Fig. 5. In a studio environment this would be placed on the output of a telecine or other film projection device. In a television receiver this would be enabled when film was being transmitted, either by a signalling means external to the video signal, or by a film mode detector, for example that described in EP-A-0567072. This detector checks, for every vertically adjacent pixel pair, whether the amplitudes of the vertically intermediate pixels of both adjacent fields lie between the amplitudes of the pixel pair. These comparison results are combined within each field. For setting film mode the combination results must equal a specific pattern within a distinct number of fields. The reset criterion is less strict: within a distinct number of fields a specific number of 'wrong' combination results may occur, and the film mode is switched off if this number is exceeded.
    The first field, or half image, OFFI of the input signal IN contains the information in the correct temporal position. This field undergoes no further processing and is output directly after a suitable compensating delay τ1. The motion compensated second field MCSFI is generated by the subsequent processing.
    The first processing step is to combine the two received fields in interlace-to-progressive converter means IPCS to recreate the original film image. The function of IPCS is depicted in Fig. 6. Time T is again represented by the horizontal axis. The vertical axis represents the vertical position V. It can be seen that the progressive scan image output frame OFR is recreated by delaying the first field FI1 with respect to the second field FI2 by one field period FIP.
    The next step in the process is to calculate the motion of each point in the image within motion estimation means DSME. This may be done by a number of different means, using different types of motion estimation and a frame memory FRM1. As long as the result of the process is a motion vector associated with each point in the image, centred on the temporal position of the desired output field MCSFI, the precise means to provide these vectors is unimportant, aside from the fact that different methods will provide different levels of performance for different costs.
    In the embodiment described here, this motion estimation is done using double sided motion estimation based on the luminance signal, described fully in WO-A-95/07591 of the applicant. Essentially, this is a full search block matching process with the search window centered on the current block position CBP of the interpolated output field IFI, as shown in Fig. 7. The backward half of the motion vector BHMV has a landing point within the backward search window BSW of a backward field BFI. The forward half of the motion vector FHMV has a landing point within the forward search window FSW of a forward field FFI.
    Each successive pixel block CBP of the current field or frame IFI is matched with candidate pixel blocks within the search window BSW of the backward field or frame and with the corresponding pixel blocks within the search window FSW of the forward field or frame, each window having half of a preselected window width, in order to select a motion vector whose first part is related to the location of the best matching block of the backward field or frame and whose second part is related to the location of the best matching block of the forward field or frame; the current position of the search windows BSW and FSW is related to the position of the current pixel block of the current field or frame.
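  • The two preceding steps can be illustrated as follows (a NumPy sketch with assumed block and search-window sizes, no border handling and no sub-pel refinement; it uses one common reading of double sided matching, in which the backward and forward candidate blocks are compared with each other):
    import numpy as np

    def weave(fi1, fi2):
        """IPCS: rebuild a progressive frame from the two fields of one film frame
        (the assignment of fields to odd/even lines is an assumption of this sketch)."""
        frame = np.empty((fi1.shape[0] + fi2.shape[0], fi1.shape[1]), dtype=fi1.dtype)
        frame[0::2] = fi1
        frame[1::2] = fi2
        return frame

    def double_sided_match(backward_frame, forward_frame, block_pos, block=8, search=8):
        """Full-search double sided block matching for one block position CBP of the
        interpolated field IFI; block_pos is assumed to lie far enough from the borders."""
        y, x = block_pos
        best_err, best_vec = np.inf, (0, 0)
        for dy in range(-search, search + 1, 2):          # even steps so each half lands on a pixel
            for dx in range(-search, search + 1, 2):
                by, bx = y - dy // 2, x - dx // 2          # landing point BHMV in the backward frame
                fy, fx = y + dy // 2, x + dx // 2          # landing point FHMV in the forward frame
                b = backward_frame[by:by + block, bx:bx + block].astype(np.int32)
                f = forward_frame[fy:fy + block, fx:fx + block].astype(np.int32)
                err = np.abs(b - f).sum()                  # matching error of this candidate vector
                if err < best_err:
                    best_err, best_vec = err, (dy, dx)
        return best_vec, best_err                          # chosen vector and its minimum error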
  • The result of this block matching yields a number of candidate vectors for each block which are then post-processed in subsequent vector post-processing means VPP. In this means, the motion vectors are corrected, i.e. are made reasonable, and smoothed.
    This can be performed as described in EP-A-0 648 052 of the applicant to provide a final output vector for each pixel, representing the best estimate of the motion of that pixel. For this estimation a motion vector related to each block is calculated and for any pixel of the current block, a pixel motion vector is calculated using four motion vectors, that is the motion vector of the current block and the motion vectors of the three adjacent blocks.
    For any pixel, several estimated errors can be calculated from the error values related to the four block motion vectors, taking into account the position of the pixel relative to the centre of each of the corresponding blocks; the block motion vector yielding the minimum of these estimated errors is selected as the final motion vector for that pixel.
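  • A rough illustration of this block-to-pixel localisation follows; the inverse-distance weighting is an assumption of the sketch and not the formula of EP-A-0 648 052:
    import numpy as np

    def pixel_vector(pixel_pos, blocks):
        """Select one motion vector for a pixel from four block vectors.
        blocks: four (centre, vector, match_error) tuples for the current block and
        the three relevant neighbouring blocks."""
        best_vec, best_est = None, np.inf
        for centre, vector, error in blocks:
            dist = np.hypot(pixel_pos[0] - centre[0], pixel_pos[1] - centre[1])
            estimate = error * (1.0 + dist)   # vectors of nearer block centres are trusted more
            if estimate < best_est:
                best_vec, best_est = vector, estimate
        return best_vec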
  • Advantageously, the post-processing means VPP may also attempt to correct for the effects of repeated structures as described in EP-A-0 647 919 and localise the block based motion vectors to provide one motion vector per pixel.
    Such correction of motion vectors includes the following steps:
    • evaluating error values which are related to a matching of said pixel blocks between different pictures of said video signal, whereby, additional to a basic minimum error, a further minimum error belonging to adjacent pixel positions, except directly adjacent pixel positions, is searched along the row, or column, containing the pixel position of said basic minimum error;
    • comparing said further minimum error with a preselected threshold, resulting in a periodic structure decision if such further minimum error is less than said threshold;
    • when a periodic structure in the picture content is detected, replacing the current motion vector corresponding to said basic minimum error by a motion vector of an adjacent pixel block, in particular either from the block to the left or from the block above, whichever yields the smaller error in the current block, or by taking the mean of these two vectors, as sketched below.
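  • A sketch of the periodic-structure test follows (Python; only the row of the error surface is examined here, and the layout of the error values is an assumption of the sketch):
    import numpy as np

    def periodic_structure_vector(error_row, best_col, threshold, vec_left, vec_above,
                                  err_left, err_above):
        """error_row holds the matching errors along the row of the error surface that
        contains the basic minimum at index best_col.  Returns a replacement vector if
        a periodic structure is detected, otherwise None (keep the original vector)."""
        masked = np.asarray(error_row, dtype=float).copy()
        masked[max(0, best_col - 1):best_col + 2] = np.inf   # exclude directly adjacent positions
        if masked.min() < threshold:                         # further minimum -> periodic structure
            # take the neighbour vector that matches the current block better
            return vec_left if err_left <= err_above else vec_above
        return None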
  • In addition, a measure of confidence of the motion vector can be generated in VPP on a pixel by pixel basis as described in EP-A-0 648 047 of the applicant. When the confidence is low, there is a high probability that the motion vector is incorrect (for example, on material newly uncovered by the passage of a moving object for which no motion information can be correctly found). In this case, a default motion vector of 0 in both the horizontal and vertical directions is taken.
    For such a measure of confidence, signal paths with different interpolation processing - in particular motion compensated interpolation and fallback interpolation - are formed, and the output signals of these signal paths are combined in relation to a measure of confidence which is derived from a minimum motion estimation error of the input video signal and which is in particular block based.
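  • A minimal sketch of such a fallback mix, assuming a confidence that falls off linearly with the block based minimum matching error (the mapping used in EP-A-0 648 047 may differ):
    import numpy as np

    def blend_with_fallback(mc_pixel, fallback_pixel, min_match_error, max_error=255.0):
        """Mix motion compensated and fallback interpolation in relation to a confidence
        measure: 1.0 keeps the motion compensated output, 0.0 keeps the fallback
        (e.g. a non compensated field average)."""
        confidence = 1.0 - np.clip(min_match_error / max_error, 0.0, 1.0)
        return confidence * mc_pixel + (1.0 - confidence) * fallback_pixel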
  • DSME may use a non-uniform measurement of candidate motion vectors as described in EP-A-0 639 926 of the applicant to calculate the motion vectors. The accuracy of this measurement is made non-uniform within the search window by dividing the window into, at least, an area of high precision and an area of lower precision, whereby in the area of lower precision the density of the pixel values of the input signal used for the motion vector calculation is lower than in the area of high precision.
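  • One way to picture this non-uniform measurement (the split of the window and the subsampling factor below are assumptions of the sketch) is to evaluate the matching criterion with full pixel density for candidates in a central high-precision area and with subsampled pixels for candidates in the outer, lower-precision area:
    import numpy as np

    def is_high_precision(candidate, inner_radius=4):
        """Candidate vectors close to the window centre fall into the high-precision area."""
        return abs(candidate[0]) <= inner_radius and abs(candidate[1]) <= inner_radius

    def block_error(block_a, block_b, high_precision):
        """SAD of two blocks; in the lower-precision area only every second pixel in
        each direction contributes, so fewer input pixel values are used there."""
        a, b = block_a.astype(np.int32), block_b.astype(np.int32)
        if high_precision:
            return np.abs(a - b).sum()
        return np.abs(a[::2, ::2] - b[::2, ::2]).sum() * 4   # rescaled to stay comparable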
  • The range of the search in both the horizontal and vertical directions is only limited by constraints of cost and complexity.
  • The output vectors of VPP are applied to double sided motion compensated interpolation means DMCI, shown in more detail in Fig. 8, which may use a further frame memory FRM2 and which generate from the appropriately delayed (further delay means τ2) progressive picture signal a motion compensated output image IFI2 (progressive frame or field) in the temporal position of the second field FI2 of the television image. Each output pixel IP at the position of zero vector ZV is the average of one pixel in the current film frame CFR and one pixel in the previous film frame PFR. The two pixels used in the average are selected by applying the measured motion vector (one half HMMVP in frame PFR and one half HMMVC in frame CFR) for the desired output pixel. If this vector represents accurately the direction of motion DM, these two pixels will correspond to the same part of the same object in each frame.
    As shown in this embodiment, a complete progressively scanned image can be generated in the position of field FI2. In case of 100Hz output, DMCI interpolates three intermediate motion compensated output images IFI2 (progressive frames or fields), whereby respective percentages of the components of the motion vectors can be applied to the interpolation of a specific one of the three images.
    The final step, which may be combined in the interpolator DMCI by directly interpolating a field or three fields only, respectively, is to produce the output field, or fields, MCSFI from the interpolated information. This is done by taking every second line from the progressive image.
    Switch means SW select either field FI1 or the newly generated field, or fields, MCSFI at the output OU of the inventive apparatus depending on the field being transmitted.
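  • A sketch of this double sided interpolation for one output pixel, and of the final line decimation, follows (NumPy; integer half-vectors and plain averaging are assumed, without the sub-pel filtering a practical interpolator would use):
    import numpy as np

    def dmci_pixel(prev_frame, curr_frame, y, x, vec):
        """Output pixel IP of the interpolated image IFI2 at (y, x): the average of one
        pixel of the previous film frame PFR and one of the current film frame CFR,
        reached by applying half of the measured motion vector on each side.
        For 100Hz output the halves are replaced by the respective fractions
        (e.g. 1/4 and 3/4) belonging to each of the three interpolated images."""
        dy, dx = vec
        p = prev_frame[y - dy // 2, x - dx // 2]   # half vector HMMVP into frame PFR
        c = curr_frame[y + dy // 2, x + dx // 2]   # half vector HMMVC into frame CFR
        return (int(p) + int(c)) // 2

    def output_field(ifi2):
        """Final step: the output field MCSFI is every second line of the progressive image."""
        return ifi2[1::2]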
  • Colour information can be handled in exactly the same way as luminance for the interpolation process, using the motion vectors derived for the luminance signal.
  • The invention can also be used in a video signal decoder (TV, VCR, PC, CD-I, CD Video) if the transmitted or recorded signals have been coded using data reduction, e.g. MPEG1 or MPEG2. In such cases the transmitted/recorded motion information can be used in, or instead of, circuits DSME and/or VPP.
  • In case of other film picture rates or field rates (120Hz) the numbers given can be adapted correspondingly.
  • When the referenced techniques are implemented in the invention, the lengths of the field and frame delays described therein are adapted correspondingly.
  • The order of usage of the fields can be exchanged, i.e. field OFFI can be the second field FI2.

Claims (8)

  1. Method for reducing conversion artefacts (DTOO) when film source images are converted into interlace format of at least doubled picture frequency, characterised in that motion compensated fields (MCSFI) of one type of said interlace format - e.g. second fields (FI2) - are generated (DMCI, PIDS, SW) between non-motion compensated fields (OFFI) of the other type - e.g. first fields (FI1) - of said interlace format.
  2. Method according to claim 1, wherein three motion compensated fields (MCSFI) are generated between each pair of said non-motion compensated fields (OFFI).
  3. Method according to claim 1 or 2, wherein said generation of motion compensated fields (MCSFI) takes place in a television receiver or video recorder or video decoder for correction after transmission or takes place in a studio for correction before transmission or recording.
  4. Method according to any of claims 1 to 3, wherein for determining (DSME) motion information for said motion compensation a full search block matching is performed with the search window centered on the current block position (CBP) of the interpolated output field (IFI), and wherein the backward half of the motion vector (BHMV) has a landing point within the backward search window (BSW) of a backward field (BFI) and the forward half of the motion vector (FHMV) has a landing point within the forward search window (FSW) of a forward field (FFI).
  5. Method according to any of claims 1 to 4, wherein said generation of motion compensated fields (MCSFI) is performed (DMCI, FRM2, PIDS) using double sided motion compensated interpolation.
  6. Method according to any of claims 1 to 5, wherein said film source images have been digitally coded using data reduction and respectively decoded prior to said conversion, in particular by using MPEG1 or MPEG2 coding and by using the motion information related to this coding for said motion compensation.
  7. Apparatus for reducing conversion artefacts (DTOO) when film source images are converted into interlace format of at least doubled picture frequency, using the method according to any of claims 1 to 6 and including:
    interlace-to-progressive converter means (IPCS) which reconvert each pair of interlaced fields to form a progressive signal corresponding to a frame of the original film source image (OFR);
    motion estimation means (DSME, FRM1) which calculate motion vectors for each pixel in said recreated images (OFR);
    subsequent vector post-processing means (VPP) which correct the motion vectors calculated in said motion estimation means;
    double sided motion compensated interpolation means (DMCI) which operate on the appropriately delayed (τ2) recreated images (OFR) using said corrected motion vectors;
    switching means (SW) which provide the output signal either from an appropriately delayed (τ1) input field (IN) or from a processed image from said interpolation means (DMCI, PIDS).
  8. Apparatus according to claim 7, wherein in said motion estimation means (DSME, FRM1) a full search block matching is performed with the search window centered on the current block position (CBP) of the interpolated output field (IFI), and wherein the backward half of the motion vector (BHMV) has a landing point within the backward search window (BSW) of a backward field (BFI) and the forward half of the motion vector (FHMV) has a landing point within the forward search window (FSW) of a forward field (FFI).
EP19950104170 1994-03-30 1995-03-22 Method and apparatus for reducing conversion artefacts Expired - Lifetime EP0675643B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP19950104170 EP0675643B1 (en) 1994-03-30 1995-03-22 Method and apparatus for reducing conversion artefacts

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
EP94400729 1994-03-30
EP94400729 1994-03-30
EP19950104170 EP0675643B1 (en) 1994-03-30 1995-03-22 Method and apparatus for reducing conversion artefacts

Publications (2)

Publication Number Publication Date
EP0675643A1 EP0675643A1 (en) 1995-10-04
EP0675643B1 true EP0675643B1 (en) 1999-07-21

Family

ID=26137483

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19950104170 Expired - Lifetime EP0675643B1 (en) 1994-03-30 1995-03-22 Method and apparatus for reducing conversion artefacts

Country Status (1)

Country Link
EP (1) EP0675643B1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0883298A3 (en) * 1997-06-04 2000-03-29 Hitachi, Ltd. Conversion apparatus for image signals and TV receiver
FR2773038B1 (en) * 1997-12-24 2000-02-25 Thomson Multimedia Sa METHOD AND DEVICE FOR INTERPOLATING IMAGES FROM MPEG-ENCODED VIDEO DATA
JP3596521B2 (en) * 2001-12-13 2004-12-02 ソニー株式会社 Image signal processing apparatus and method
US7425990B2 (en) 2003-05-16 2008-09-16 Sony Corporation Motion correction device and method
EP1592247A1 (en) 2004-04-30 2005-11-02 Matsushita Electric Industrial Co., Ltd. Block mode adaptive motion compensation

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0294962B1 (en) * 1987-06-09 1995-07-19 Sony Corporation Motion vector estimation in television images
JPH04501196A (en) * 1989-05-29 1992-02-27 フィリップス エレクトロニクス ネムローゼ フェンノートシャップ Circuit layout and interpolation circuit for reducing line and edge flicker of television images
DE4213551A1 (en) * 1992-04-24 1993-10-28 Thomson Brandt Gmbh Method and device for film mode detection

Also Published As

Publication number Publication date
EP0675643A1 (en) 1995-10-04

Similar Documents

Publication Publication Date Title
US5610662A (en) Method and apparatus for reducing conversion artifacts
US5221966A (en) Video signal production from cinefilm originated material
US7098959B2 (en) Frame interpolation and apparatus using frame interpolation
US6118488A (en) Method and apparatus for adaptive edge-based scan line interpolation using 1-D pixel array motion detection
US5642170A (en) Method and apparatus for motion compensated interpolation of intermediate fields or frames
KR950006774B1 (en) Motion compensation predictive method
US20070269136A1 (en) Method and device for generating 3d images
CN101416523B (en) Motion compensated frame rate conversion with protection against compensation artifacts
US8175163B2 (en) System and method for motion compensation using a set of candidate motion vectors obtained from digital video
JPH08275116A (en) Means and apparatus for converting interlaced video frame sequence into sequential scanning sequence
JP3222496B2 (en) Video signal processing device
US20090208123A1 (en) Enhanced video processing using motion vector data
KR20040054758A (en) improved spatial resolution of video images
JP2003163894A (en) Apparatus and method of converting frame and/or field rate using adaptive motion compensation
US7218354B2 (en) Image processing device and method, video display device, and recorded information reproduction device
EP2564588B1 (en) Method and device for motion compensated video interpoltation
JP2006504175A (en) Image processing apparatus using fallback
GB2240232A (en) Converting field rate of telecine signal
JPH11112940A (en) Generation method for motion vector and device therefor
JP2001024988A (en) System and device for converting number of movement compensation frames of picture signal
JP3864444B2 (en) Image signal processing apparatus and method
US5355169A (en) Method for processing a digital video signal having portions acquired with different acquisition characteristics
EP0675643B1 (en) Method and apparatus for reducing conversion artefacts
JP2004519901A (en) Easy motion estimation
US6094232A (en) Method and system for interpolating a missing pixel using a motion vector

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): DE FR GB IT

17P Request for examination filed

Effective date: 19960308

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: THOMSON MULTIMEDIA

17Q First examination report despatched

Effective date: 19980421

GRAG Despatch of communication of intention to grant

Free format text: ORIGINAL CODE: EPIDOS AGRA

GRAG Despatch of communication of intention to grant

Free format text: ORIGINAL CODE: EPIDOS AGRA

GRAG Despatch of communication of intention to grant

Free format text: ORIGINAL CODE: EPIDOS AGRA

GRAH Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOS IGRA

GRAH Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOS IGRA

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): DE FR GB IT

REF Corresponds to:

Ref document number: 69510851

Country of ref document: DE

Date of ref document: 19990826

ET Fr: translation filed
ITF It: translation for a ep patent filed

Owner name: BARZANO' E ZANARDO MILANO S.P.A.

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed
REG Reference to a national code

Ref country code: GB

Ref legal event code: 746

Effective date: 20010813

REG Reference to a national code

Ref country code: GB

Ref legal event code: IF02

REG Reference to a national code

Ref country code: FR

Ref legal event code: D6

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20110405

Year of fee payment: 17

Ref country code: IT

Payment date: 20110316

Year of fee payment: 17

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20110322

Year of fee payment: 17

Ref country code: GB

Payment date: 20110329

Year of fee payment: 17

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20120322

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST

Effective date: 20121130

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 69510851

Country of ref document: DE

Effective date: 20121002

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20120322

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20120402

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20120322

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20121002