GB2422974A - De-interlacing of video data - Google Patents

De-interlacing of video data

Info

Publication number
GB2422974A
Authority
GB
United Kingdom
Prior art keywords
pixel
correlation data
correlation
deriving
missing line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB0502375A
Other versions
GB0502375D0 (en)
Inventor
Paolo Guiseppe Fazzini
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Imagination Technologies Ltd
Original Assignee
Imagination Technologies Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Imagination Technologies Ltd filed Critical Imagination Technologies Ltd
Priority to GB0502375A priority Critical patent/GB2422974A/en
Publication of GB0502375D0 publication Critical patent/GB0502375D0/en
Priority to US11/125,416 priority patent/US20060176394A1/en
Priority to EP06709633A priority patent/EP1847124A2/en
Priority to PCT/GB2006/000387 priority patent/WO2006082426A2/en
Priority to JP2007553699A priority patent/JP2008529436A/en
Publication of GB2422974A publication Critical patent/GB2422974A/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/01Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0117Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving conversion of the spatial resolution of the incoming video signal
    • H04N7/012Conversion between an interlaced and a progressive signal

Abstract

A method and apparatus are provided for converting an interlaced video signal to a non-interlaced video signal. For each pixel in each missing line of a video field in a video signal, correlation data is derived for each of the set of possible interpolations to be used in reconstructing the pixel in the missing line. A correlation corresponding to the interpolation scheme likely to give the best results for the missing pixel is selected, and an interpolation scheme is selected in dependence on this. The pixel in the missing line is then interpolated. The correlation data used in the selection of interpolation schemes is derived from data in the same field as the pixel to be reconstructed and from temporally spaced fields. The correlation data may be calculated by the sum of absolute differences (SAD) method or by calculating the mean square error. The correlation indicating the lowest difference from sampled pixels is used to select the most appropriate interpolation.

Description

DE-INTERLACING OF VIDEO DATA
This invention relates to a method and apparatus for de-interlacing or scan converting an interlaced video signal to a progressive scan or deinterlaced video signal.
Broadcast television signals are usually provided in an interlaced form.
For example, the phase alternate line (PAL) system used in Europe is made up of video frames comprising two interlaced fields. Each field comprises alternate lines of the frame. Thus, when the signal is applied to a display, the first field is applied to the odd-numbered lines of the display, followed by the second field being applied to the even-numbered lines. The field rate, the rate at which the interlaced fields are applied to the display, is usually 50 Hz, giving a frame rate of 25 Hz. Thus, if each field is converted to a whole frame of video data, i.e. the missing lines in each field are somehow generated, the effective frame rate becomes 50 Hz. This also has the advantage of increasing the resolution of the television picture.
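As a purely illustrative aside (not part of the patent), the field-to-frame relationship above can be sketched in a few lines: weaving the two fields of a frame back together recovers the full frame only for static scenes, which is why the missing lines of each individual field must otherwise be interpolated.

```python
# Illustrative sketch: how two interlaced fields combine into one
# progressive frame. One field carries the even-numbered lines, the
# other the odd-numbered lines; each field on its own is missing
# every second line.

def weave(field_even, field_odd):
    """Interleave two fields into a full frame (valid for static scenes only)."""
    frame = []
    for even_line, odd_line in zip(field_even, field_odd):
        frame.append(even_line)  # lines 0, 2, 4, ...
        frame.append(odd_line)   # lines 1, 3, 5, ...
    return frame

field_even = [[10, 10], [30, 30]]  # lines 0 and 2 of the frame
field_odd  = [[20, 20], [40, 40]]  # lines 1 and 3 of the frame
frame = weave(field_even, field_odd)
```

With motion between the fields, the woven lines no longer line up, which is the starting point for the interpolation schemes described below.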
In US patent no. 5532751, a method is disclosed for evaluating the variation between pixels in an image to detect edges or contours. If the variation between pixels is below a threshold, then the orientation of an edge is estimated and a new pixel is formed from the average of the pixels lying along the estimated orientation. If the estimate of edge orientation is unsuccessful, then a new pixel is formed from the average of two vertically aligned pixels with respect to the pixel to be derived. This technique has the drawback that it can generate visible artefacts in areas which have two or more pairs of pixels with a high mutual resemblance.
An improvement on this method is described in US patent no. 6133957. In this, the variation between pixels or sets of pixels is computed to reconstruct borders. Two variations are chosen from among those with the smallest values, and the pixel to be reconstructed is generated as a weighted average of the pixels which produce the selected variations.
Again, this technique can cause visible artefacts in very detailed scenes.
These can be even more noticeable when the amount of motion in the scene is low. Our British patent application no. 2402288 proposes a solution in which the vertical frequencies present in the image data are preserved during de-interlacing when clear information on borders is not available.
The problem of de-interlacing can be appreciated from Figure 1, in which a plot of the colour (luminance) of the pixels with respect to their position within the frame is shown. X and Y are the co-ordinates of a pixel and Z is the pixel luminance. The white stripes in the X-Y plane represent the lines of pixels for which data is available from a field, and the grey stripes represent the missing lines, i.e. the lines to be reconstructed. The grey projected surface along the Z axis represents the luminance values of the known pixels, with a surface interpolated between the known values. In de-interlacing, or finding the values of the pixels in the missing lines, an attempt is made to increase the resolution of this projected surface.
All the techniques discussed above for border reconstruction share the common feature of retrieving input data from one instant of time. The missing information is then reconstructed in the surface of Figure 1 using data from one instant of time only, i.e. from the current field.
Other methods have been proposed to de-interlace video data using temporal information as well. The best known of these are motion compensation-based schemes. In all schemes which use motion compensation, the purpose is to detect the movement of objects in a scene and to follow this movement through time. Such an approach is particularly effective when the motion present is mainly translational, for example when deformations and rotations are slow enough to be well approximated with substantially straight translations over a small number of fields.
The problem with motion compensation techniques is that in some cases even slow-moving objects can present a degree of deformation or rotation which is capable of yielding reconstruction problems. This can result in flickering or high vertical frequencies even in static scenes. These types of visible artefacts are particularly noticeable to a viewer.
In static or almost static scenes where visible artefacts such as these appear, reconstruction methods based on information coming from one instant of time only (one field), such as a border reconstructor, are unable to give good performance. Furthermore, techniques based on motion compensation do not provide sufficiently good results when objects which are deforming are in the scene, and more generally when the motion cannot be efficiently approximated with translation vectors.
Preferred embodiments of the present invention provide a geometric approach to the reconstruction or interpolation of pixels of missing lines of a video field which performs very effectively with slow motion.
More particularly, if we consider the situation of Figure 1, where the surface represented relies exclusively on spatial data, then the object of the border reconstruction procedure is to refine the surface by finding the best compromise between frequencies and accuracy. Ideally, the reconstruction will yield a surface which contains higher spatial frequencies than the input field (the grey lines extended in the Y axis) whilst avoiding artefacts which are not present in the input surface. For instance, if an input surface is substantially constant with tiny fluctuations, an output surface of the type shown in Figure 1 with a large spike in it will not generally be an acceptable output.
In accordance with an embodiment of the present invention there is provided a generalised approach to border reconstructors using spatial and temporal data. Thus, this system uses data from the current field as
well as from at least the adjacent fields.
A preferred embodiment of the invention will now be described in detail by way of example with reference to the accompanying drawings in which:
Figure 1 shows the projected surface discussed above;
Figure 2 shows, in the vertical and time directions, the positions of lines and missing lines on a number of successive fields of image data;
Figure 3 shows schematically the type of analysis which is made when determining how best to interpolate missing pixels in a single field;
Figure 4 shows, in the vertical and time directions, the positions of additional data points which may be generated for use in interpolating missing pixels;
Figure 5 shows an alternative selection of data points for use in interpolating missing pixels; and
Figure 6 shows a block diagram of an embodiment of the invention.
In the arrangement of Figure 3, three different possible interpolation schemes are shown and correlations are evaluated for each. The middle scheme shown comprises correlation of the pixels directly above and below the pixel to be reconstructed, and correlation data between the pairs of pixels immediately adjacent to these. A further interpolation is evaluated in the left-hand example of Figure 3 by looking at the correlation between pixels on lines which pass diagonally, sloping down to the right, through the pixel being reconstructed. The same process with the opposite diagonal is shown in the right-hand example of Figure 3.
The correlation between the various pairs of pixels can be derived using the sum of absolute differences (SAD), the mean square error (MSE), or other well-known statistical techniques. The sum of absolute differences and the mean square error are determined in a well-known manner.
The inputs to the SAD and MSE derivations are the luminances of the pixels in the lines above and below the pixel to be reconstructed in a field.
The graph on the right-hand side of Figure 3 shows an example of the SAD-based procedure using only five pixels for each row and three correlations of symmetrically located sets of pixels, each made up of three pixel pairs.
In practice, more pixels are involved in the computation to ensure greater accuracy. Preferably between seven and thirty pixel pairs are used.
If the SAD approach to comparing the values of pairs of pixels is used, then the arrangement of Figure 3 requires three SAD values, SAD 0, SAD 1 and SAD 2, which are shown graphically at the right-hand side of Figure 3. This is the correlation curve for the various possible interpolation schemes. In many techniques, the interpolation scheme which gives the smallest SAD or the smallest MSE is used for the interpolation, although in practice it does not always give the best answer.
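The Figure 3 style of selection can be sketched as follows. This is an illustrative reconstruction, not the patent's exact scheme: the function names, the three-offset candidate set and the window size are my assumptions. Each candidate direction pairs pixels from the line above with pixels from the line below, the SAD over those pairs is taken as the correlation, and the missing pixel is averaged along the direction with the smallest SAD.

```python
# Sketch: SAD-based selection of an interpolation direction for one
# missing pixel, from the lines above and below it in the same field.

def sad(pairs):
    """Sum of absolute differences over matched pixel pairs."""
    return sum(abs(a - b) for a, b in pairs)

def interpolate_pixel(above, below, x, window=1):
    """Pick the diagonal (-1), vertical (0) or anti-diagonal (+1)
    direction with the lowest SAD, then average along it."""
    best_offset, best_sad = None, None
    for offset in (-1, 0, 1):
        pairs = []
        for dx in range(-window, window + 1):
            ia, ib = x + dx + offset, x + dx - offset
            if 0 <= ia < len(above) and 0 <= ib < len(below):
                pairs.append((above[ia], below[ib]))
        s = sad(pairs)
        if best_sad is None or s < best_sad:
            best_offset, best_sad = offset, s
    # Average the pixel pair along the chosen direction.
    return (above[x + best_offset] + below[x - best_offset]) / 2

# A diagonal edge: it sits at index 2 in the line above and index 4 in
# the line below, so the diagonal correlation matches best.
above = [0, 0, 100, 100, 100, 100]
below = [0, 0, 0, 0, 100, 100]
pixel = interpolate_pixel(above, below, 3)
```

Here the diagonal candidate gives a SAD of zero, so the missing pixel at x = 3 is averaged along the edge rather than vertically, which would have blurred it.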
Turning now to Figure 4, this shows a small portion of three consecutive fields in the vertical direction against time. Thus, the central field has data present on the upper and lower pixels and a missing pixel to be reconstructed in the central position. The adjacent fields have no data on the upper and lower lines but do on the central line.
Using the arrangement of Figure 3 with Figure 4 would involve making the correlations for the current field only, i.e. the central field.
The embodiment of the present invention also uses the data from adjacent fields. This can be used in the manner shown in Figure 4 by generating additional data points, shown in grey, between the fields. Each of these is generated from the nearest pair of pixels which carry data in the fields between which it falls. Thus, the four pixels which are to be used in determining how best to generate the missing pixel are first used to generate data points on lines between their positions. These are average values. The correlation process of Figure 3 can then be performed on each diagonally opposed pair of new data points for each pixel in each line of the image. This then produces two sets of correlation data for each pixel to be reconstructed. The correlation which indicates the best chance of generating a closely correct value for the missing pixel is then selected from each set of correlation data, and an interpolation scheme corresponding to that correlation is selected for interpolation of the missing pixel for each set of correlation data. If the correlation analysis is an SAD analysis, then the correlation which gives the lowest value of SAD will be selected to determine the interpolation scheme.
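The generation of the additional data points can be sketched as follows (the array names and the choice of rows are my assumptions about Figure 4's layout): each grey point is the average of the nearest data-carrying pixel in each of the two fields it lies between.

```python
# Sketch: mid-point data generation between temporally adjacent fields.
# Each new sample is the average of the nearest pair of pixels carrying
# data in the two fields it falls between.

def midpoints(row_a, row_b):
    """Average co-located pixels from two temporally adjacent rows."""
    return [(a + b) / 2 for a, b in zip(row_a, row_b)]

# A data-carrying line of the current field, and the co-sited line
# carried by the previous and next fields (which the current field lacks):
line_current = [10, 20, 30]
line_prev    = [12, 22, 32]
line_next    = [14, 24, 34]

left_mid  = midpoints(line_current, line_prev)   # points between t-1 and t
right_mid = midpoints(line_current, line_next)   # points between t and t+1
```

The diagonal correlations of Figure 3 are then run over these generated rows as well, yielding the second set of correlation data described above.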
When the best interpolation scheme from each set of SAD data has been selected, the missing pixel data is interpolated using each of the two selected schemes, and an interpolation is then made between the results from the two schemes to give the resultant output. If more vertically or temporally spaced pixels are used as input, and more correlations are performed, then this can be extended by forming an interpolation between the two or more interpolation schemes determined by the correlation data to produce the best resultant data for a missing pixel.
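The final combination step can be sketched in one line (the equal weighting is my assumption; the patent leaves the weighting between the two schemes' results open):

```python
# Sketch: the spatially interpolated pixel and the temporally interpolated
# pixel are themselves interpolated to give the output pixel.

def blend(spatial_pixel, temporal_pixel, weight=0.5):
    """Interpolate between the two schemes' results for the output pixel.
    weight=1.0 would trust the spatial result alone."""
    return weight * spatial_pixel + (1 - weight) * temporal_pixel

output_pixel = blend(100, 50)  # equal-weight combination
```

With more than two selected schemes, the same idea extends to a weighted combination of all the interpolated candidates.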
An alternative scheme is shown in Figure 5. In this, rather than constructing mid points between the lines, the correlations are performed on the vertically adjacent lines and on the temporally adjacent lines from adjacent fields. This avoids the need for any additional circuitry for generation of mid points and in most cases gives good results.
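The Figure 5 alternative can be sketched as follows. An assumed detail here, labelled as such: the same SAD test is applied once to the vertically adjacent lines and once to the co-sited lines of the temporally adjacent fields, and the better-matching pair supplies the missing pixel, with no mid-point generation.

```python
# Sketch: choose between a spatial and a temporal source for the missing
# pixel by comparing SADs over a small window, without generating mid points.

def sad(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def pick_source(above, below, prev_centre, next_centre, x, w=1):
    lo, hi = max(0, x - w), x + w + 1
    vertical_sad = sad(above[lo:hi], below[lo:hi])              # same field
    temporal_sad = sad(prev_centre[lo:hi], next_centre[lo:hi])  # adjacent fields
    if vertical_sad <= temporal_sad:
        return (above[x] + below[x]) / 2               # spatial average
    return (prev_centre[x] + next_centre[x]) / 2       # temporal average

# Static scene: the adjacent fields agree exactly, so the temporal pair
# wins even though the vertical neighbours differ strongly.
pixel = pick_source([0, 0, 0], [100, 100, 100], [50, 50, 50], [50, 50, 50], 1)
```

This captures why the scheme behaves well on static detail: when nothing moves, the temporally adjacent fields carry the missing line exactly.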
In either the example of Figure 4 or Figure 5, the interpolation and correlation schemes could be expanded to take account of lines and fields which are further spaced from the pixel to be reconstructed. In some cases, this will improve the quality of the reconstructed image.
By using this approach, a coherent continuity is given to the space time surface around the pixel to be reconstructed.
Figure 6 shows a block diagram of a system appropriate for implementing the scheme shown in Figure 5. This can be modified, with the addition of extra units to generate the mid points, to implement the scheme of Figure 4.
Input video data is fed through three field stores, 2, 4, and 6. Field store 4 contains the field with the missing lines which are to be reconstructed, referred to as the current field. Thus, at the start of the video sequence, a first field will be fed to field store 2, then to field store 4, then to field store 6 and processing will commence. The process will continue with fields moving from field store 2 to field store 4, field store 4 to field store 6, and the next field in the sequence being fed to field store 2.
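The three-field delay line can be modelled in a few lines (a toy model, not the patent's hardware; the deque stands in for field stores 2, 4 and 6):

```python
from collections import deque

# Toy model of the three-field delay line: the stores behave like a
# shift register, with the middle slot always holding the "current"
# field whose missing lines are reconstructed, while its neighbours
# supply the temporal data.

stores = deque(maxlen=3)        # [store 2 (newest), store 4, store 6 (oldest)]
for field_number in range(5):   # five fields arrive in sequence
    stores.appendleft(field_number)  # new field enters store 2; oldest drops out

current_field = stores[1]       # store 4 holds the current field
```

After five fields have arrived, the stores hold fields 4, 3 and 2, and field 3 is the one being reconstructed.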
Data is read out from field store 4 to a first line store 8 and then to a second line store 10. Thus, a line is first read into line store 8 and then passed to line store 10, and a second line is fed into line store 8. The two line stores then contain the two lines immediately adjacent to the missing line in the current field.
Next, for each pixel in turn to be reconstructed for the field in field store 4, a correlation unit 12 performs a sequence of correlations for the different interpolations which may be used to generate the missing pixel. This is done in a manner similar to that illustrated in Figure 3, but with more interpolation schemes being used to produce correlations. The resultant correlation data is fed to a best correlation selector 14 which selects the correlation likely to give the best interpolation scheme for generating the missing pixel. The output of this is then used by an interpolation scheme selector 16 to select the interpolation which corresponds to the correlation selected by the best correlation selector 14. This interpolation scheme is then loaded into an interpolator 18, which also receives data from the line stores 8 and 10 after any necessary delays 20. Thus the interpolator receives the pixel data required to perform the interpolation for the missing pixel.
At the same time, a line from each of field stores 2 and 6 is read into further line stores 22 and 24 respectively. These comprise the lines which are spaced in time by one field from the line which is being reconstructed.
In a similar manner to the process applied to the data from line stores 8 and 10, a correlation unit 26 performs a series of correlations on the data in line stores 22 and 24, i.e. the possible pixels to be used in reconstructing a missing pixel for field store 4. The results of these correlations are sent to a best correlation selector 28 which selects the correlation most likely to give the best result. For example, this could be the lowest SAD correlation. The output of the best correlation selector 28 is then used by an interpolation scheme selector 30 to select an interpolation scheme corresponding to the best correlation. This interpolation scheme is then loaded into an interpolator 32, which receives data from line stores 22 and 24 after any appropriate delay 34 and performs the selected interpolation on that data to produce data for the missing pixel. This occurs for each pixel in turn, substantially at the same time as the process operating on data from field store 4.
The results from the interpolators 18 and 32 are fed to a further interpolator 34. This performs an interpolation between the two interpolated pixels to derive an output pixel which is provided to a frame store 36 which also receives data from line store 10 corresponding to the known lines of the current field for each field in turn. Once this frame store is full, the resultant video signal can be sent to a display 38 or can be stored.
Preferably the whole process takes place in real time so that it can be performed on a video signal being received by a television receiver which converts the signal into a non-interlaced form ready for display.
Preferably, the system of Figure 6 is included in a television receiver so that new receivers including this system can display a higher resolution version of an interlaced signal.
In an improvement on the arrangement of Figure 6, two or more sets of the hardware can be provided operating in parallel on different lines of the field stores 2, 4, and 6 to improve processing speed.
In an alternative, the system of Figure 6 can be implemented in a dedicated processor. Two or more dedicated processors can be provided in parallel to improve the speed of processing. One possibility is to have a processor available for each of the missing lines of the field in field store 4, thereby minimising processing time. This would of course make the unit more expensive.
In an alternative to the arrangements of Figures 4 and 5, and consequently the system of Figure 6, a four-input correlation could be made between vertically adjacent pixels and temporally adjacent pixels for a number of different possible interpolations between these pixels.

Claims (18)

1. A method for converting an interlaced video signal to a non-interlaced video signal comprising the steps of:
for each pixel in each missing line of a video field in a video signal, deriving correlation data for each of a set of possible interpolations to be used in reconstructing the pixel in the missing line;
selecting a correlation corresponding to the interpolation likely to give the best result for the missing pixel;
selecting an interpolation scheme for the pixel in the missing line in dependence on the selected correlation; and
interpolating the pixel in the missing line with the selected interpolation scheme;
wherein the step of deriving correlation data comprises deriving correlation data from the field containing the missing line and from adjacent fields.
2. A method according to claim 1 in which the step of deriving correlation data for each of a set of possible interpolation schemes comprises deriving correlation data from pixels in the same field as the pixel in the missing line, and deriving correlation data from fields temporally spaced from that field.
3. A method according to claim 2 in which the step of deriving correlation data from pixels in the same field comprises deriving a set of correlation data, each correlation in the set corresponding to a different interpolation scheme.
4. A method according to claim 2 or 3 in which the step of deriving correlation data from temporally spaced fields comprises deriving a set of correlation data, each correlation in the set corresponding to a different interpolation scheme.
5. A method according to claim 3 or 4 in which the step of selecting an interpolation scheme comprises selecting a first interpolation scheme from the set of correlation data derived from pixels in the same field, and a second interpolation scheme from the set of correlation data derived from temporally spaced fields.
6. A method according to claim 5 in which the step of interpolating the pixel in the missing line comprises interpolating the first pixel data with the first selected interpolation scheme, interpolating the second pixel data with the second selected interpolation scheme, and interpolating the pixel in the missing line from the first and second pixel data.
7. A method according to claim 1 including the step of deriving a set of correlation data points corresponding to data points to be used in interpolating a pixel in a missing line of a video signal, the set of correlation data points being derived from contributions from pixels in a current field containing the missing line and from pixels in temporally spaced fields.
8. A method according to claim 7 in which at least four correlation data points are derived for each pixel in a missing line.
9. Apparatus for converting an interlaced video signal to a non-interlaced video signal comprising:
means which, for each pixel in each missing line of a video field in a video signal, derives correlation data for each of a set of interpolations to be used in reconstructing the pixel in a missing line;
means for selecting a correlation corresponding to the interpolation likely to give the best result for the missing pixel;
means for selecting an interpolation scheme for the pixel in the missing line in dependence on the selected correlation; and
means for interpolating the pixel in the missing line with the selected interpolation scheme;
wherein the means for deriving correlation data comprises means for deriving correlation data from the field containing the missing line and from adjacent fields.
10. Apparatus according to claim 9 in which the means for deriving correlation data for each of the set of possible interpolation schemes comprises means for deriving correlation data from pixels in the same field as the pixel in the missing line, and means for deriving correlation data
from fields temporally spaced from that field.
11. Apparatus according to claim 10 in which the means for deriving correlation data from pixels in the same field comprises means for deriving a set of correlation data, each correlation in the set corresponding to a different interpolation scheme.
12. Apparatus according to claim 10 or 11 in which the means for deriving correlation data from temporally spaced fields comprises means for deriving a set of correlation data, each correlation in the set corresponding to a different interpolation scheme.
13. Apparatus according to claim 11 or 12 in which the means for selecting an interpolation scheme comprises means for selecting a first interpolation scheme from the set of correlation data derived from pixels in the same field, and a second interpolation scheme from the set of correlation data derived from temporally spaced fields.
14. Apparatus according to claim 13 in which the means for interpolating the pixel in the missing line comprises means for interpolating first pixel data with the first selected interpolation scheme, means for interpolating second pixel data with the second selected interpolation scheme, and means for interpolating the pixel in the missing line from the first and second pixel data.
15. Apparatus according to claim 9 including means for deriving a set of correlation data points corresponding to data points to be used in interpolating a pixel in a missing line on a video signal, the set of correlation data points derived from contributions from pixels in a current field containing the missing line and from pixels in temporally spaced
fields.
16. Apparatus according to claim 15 in which the means for deriving a set of correlation data points derives at least four correlation data points for each pixel in a missing line.
17. A method for converting an interlaced video signal to a non-interlaced video signal substantially as herein described.
18. Apparatus for converting an interlaced video signal to a non-interlaced video signal substantially as herein described with reference to Figure 6 of the drawings.
GB0502375A 2005-02-04 2005-02-04 De-interlacing of video data Withdrawn GB2422974A (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
GB0502375A GB2422974A (en) 2005-02-04 2005-02-04 De-interlacing of video data
US11/125,416 US20060176394A1 (en) 2005-02-04 2005-05-09 De-interlacing of video data
EP06709633A EP1847124A2 (en) 2005-02-04 2006-02-06 De-interlacing of video data
PCT/GB2006/000387 WO2006082426A2 (en) 2005-02-04 2006-02-06 De-interlacing of video data
JP2007553699A JP2008529436A (en) 2005-02-04 2006-02-06 Video data deinterlacing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0502375A GB2422974A (en) 2005-02-04 2005-02-04 De-interlacing of video data

Publications (2)

Publication Number Publication Date
GB0502375D0 GB0502375D0 (en) 2005-03-16
GB2422974A true GB2422974A (en) 2006-08-09

Family

ID=34355815

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0502375A Withdrawn GB2422974A (en) 2005-02-04 2005-02-04 De-interlacing of video data

Country Status (5)

Country Link
US (1) US20060176394A1 (en)
EP (1) EP1847124A2 (en)
JP (1) JP2008529436A (en)
GB (1) GB2422974A (en)
WO (1) WO2006082426A2 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7697073B2 (en) * 2005-12-06 2010-04-13 Raytheon Company Image processing system with horizontal line registration for improved imaging with scene motion
EP2114068A1 (en) * 2008-04-30 2009-11-04 Sony Corporation Method for converting an image and image conversion unit
US9076230B1 (en) 2013-05-09 2015-07-07 Altera Corporation Circuitry and techniques for image processing

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0735748A2 (en) * 1995-03-27 1996-10-02 AT&T Corp. Method and apparatus for converting an interlaced video frame sequence into a progressively-scanned sequence
US5886745A (en) * 1994-12-09 1999-03-23 Matsushita Electric Industrial Co., Ltd. Progressive scanning conversion apparatus
US20020196362A1 (en) * 2001-06-11 2002-12-26 Samsung Electronics Co., Ltd. Apparatus and method for adaptive motion compensated de-interlacing of video data
US20040263685A1 (en) * 2003-06-27 2004-12-30 Samsung Electronics Co., Ltd. De-interlacing method and apparatus, and video decoder and reproducing apparatus using the same

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5532751A (en) * 1995-07-31 1996-07-02 Lui; Sam Edge-based interlaced to progressive video conversion system
MY117289A (en) * 1996-01-17 2004-06-30 Sharp Kk Image data interpolating apparatus
US6133957A (en) * 1997-10-14 2000-10-17 Faroudja Laboratories, Inc. Adaptive diagonal interpolation for image resolution enhancement
GB2402288B (en) * 2003-05-01 2005-12-28 Imagination Tech Ltd De-Interlacing of video data

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012038685A1 (en) 2010-09-23 2012-03-29 Imagination Technologies Limited Method and apparatus for deinterlacing video data
US8891012B2 (en) 2010-09-23 2014-11-18 Imagination Technologies, Limited De-interlacing of video data

Also Published As

Publication number Publication date
US20060176394A1 (en) 2006-08-10
EP1847124A2 (en) 2007-10-24
WO2006082426A2 (en) 2006-08-10
GB0502375D0 (en) 2005-03-16
WO2006082426A3 (en) 2007-01-18
JP2008529436A (en) 2008-07-31

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)